Jan 24 00:23:42.141262 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:23:42.141379 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:23:42.141403 kernel: BIOS-provided physical RAM map: Jan 24 00:23:42.141414 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:23:42.141424 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 24 00:23:42.141433 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 24 00:23:42.141445 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 24 00:23:42.141455 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 24 00:23:42.141466 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 24 00:23:42.141477 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 24 00:23:42.141498 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 24 00:23:42.141509 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 24 00:23:42.141555 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 24 00:23:42.141568 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 24 00:23:42.141617 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 24 00:23:42.141631 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 24 00:23:42.141649 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 24 00:23:42.141659 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 24 00:23:42.141671 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 24 00:23:42.141751 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:23:42.141763 kernel: NX (Execute Disable) protection: active Jan 24 00:23:42.141774 kernel: APIC: Static calls initialized Jan 24 00:23:42.141785 kernel: efi: EFI v2.7 by EDK II Jan 24 00:23:42.141796 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 24 00:23:42.141807 kernel: SMBIOS 2.8 present. 
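A quick cross-check of the BIOS-e820 map above: summing the "usable" ranges should land close to the total RAM the kernel reports later ("Memory: 2400612K/2567000K available"). A minimal sketch, assuming the journal text has been saved to a file named boot.log (the filename and the helper are illustrative, not part of the log):

    import re

    # Sum the inclusive "usable" BIOS-e820 ranges from a saved journal dump.
    E820_USABLE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    def usable_bytes(text: str) -> int:
        return sum(int(end, 16) - int(start, 16) + 1   # ranges are inclusive
                   for start, end in E820_USABLE.findall(text))

    with open("boot.log") as f:                        # assumed log dump
        total = usable_bytes(f.read())
    print(f"{total} bytes (~{total / 2**30:.2f} GiB)") # ~2.45 GiB for this VM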
Jan 24 00:23:42.141818 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 24 00:23:42.141828 kernel: Hypervisor detected: KVM Jan 24 00:23:42.141847 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:23:42.141858 kernel: kvm-clock: using sched offset of 12172233899 cycles Jan 24 00:23:42.141871 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:23:42.141882 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:23:42.141893 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:23:42.141904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:23:42.141916 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 24 00:23:42.141928 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:23:42.141939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:23:42.141957 kernel: Using GB pages for direct mapping Jan 24 00:23:42.141967 kernel: Secure boot disabled Jan 24 00:23:42.141979 kernel: ACPI: Early table checksum verification disabled Jan 24 00:23:42.141990 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 24 00:23:42.142009 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 24 00:23:42.142022 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142034 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142051 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 24 00:23:42.142063 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142152 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142168 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142178 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:23:42.142191 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 24 00:23:42.142202 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 24 00:23:42.142221 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 24 00:23:42.142233 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 24 00:23:42.142245 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 24 00:23:42.142257 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 24 00:23:42.142268 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 24 00:23:42.142281 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 24 00:23:42.142292 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 24 00:23:42.142304 kernel: No NUMA configuration found Jan 24 00:23:42.142350 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 24 00:23:42.142371 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 24 00:23:42.142383 kernel: Zone ranges: Jan 24 00:23:42.142395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:23:42.142406 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 24 00:23:42.142418 kernel: Normal empty Jan 24 00:23:42.142431 kernel: Movable zone start for each node Jan 24 00:23:42.142443 kernel: Early memory node ranges
Jan 24 00:23:42.142456 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:23:42.142469 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 24 00:23:42.142487 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 24 00:23:42.142500 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 24 00:23:42.142511 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 24 00:23:42.142523 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 24 00:23:42.142594 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 24 00:23:42.142609 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:23:42.142621 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:23:42.142633 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 24 00:23:42.142646 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:23:42.142658 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 24 00:23:42.142765 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:23:42.142780 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 24 00:23:42.142793 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:23:42.142832 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:23:42.142843 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:23:42.142879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:23:42.142891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:23:42.142902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:23:42.142913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:23:42.142930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:23:42.142941 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:23:42.142951 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:23:42.142962 kernel: TSC deadline timer available Jan 24 00:23:42.142972 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 24 00:23:42.142983 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:23:42.142994 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:23:42.143005 kernel: kvm-guest: setup PV sched yield Jan 24 00:23:42.143016 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 24 00:23:42.143031 kernel: Booting paravirtualized kernel on KVM Jan 24 00:23:42.143042 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:23:42.143053 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:23:42.143111 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 24 00:23:42.143124 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 24 00:23:42.143135 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:23:42.143146 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:23:42.143157 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:23:42.143170 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:23:42.143215 kernel: random: crng init done Jan 24 00:23:42.143227 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:23:42.143238 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:23:42.143248 kernel: Fallback order for Node 0: 0 Jan 24 00:23:42.143257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 24 00:23:42.143267 kernel: Policy zone: DMA32 Jan 24 00:23:42.143278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:23:42.143289 kernel: Memory: 2400612K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166128K reserved, 0K cma-reserved) Jan 24 00:23:42.143305 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:23:42.143316 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:23:42.143328 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:23:42.143340 kernel: Dynamic Preempt: voluntary Jan 24 00:23:42.143350 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:23:42.143386 kernel: rcu: RCU event tracing is enabled. Jan 24 00:23:42.143402 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:23:42.143414 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:23:42.143425 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:23:42.143438 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:23:42.143451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:23:42.143464 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:23:42.143485 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:23:42.143498 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:23:42.143511 kernel: Console: colour dummy device 80x25 Jan 24 00:23:42.143522 kernel: printk: console [ttyS0] enabled Jan 24 00:23:42.143572 kernel: ACPI: Core revision 20230628 Jan 24 00:23:42.143595 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:23:42.143606 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:23:42.143619 kernel: x2apic enabled Jan 24 00:23:42.143631 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:23:42.143642 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:23:42.143654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:23:42.143665 kernel: kvm-guest: setup PV IPIs Jan 24 00:23:42.143672 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:23:42.143751 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:23:42.143772 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:23:42.143783 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:23:42.143795 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:23:42.143802 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:23:42.143809 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:23:42.143816 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:23:42.143823 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:23:42.143830 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:23:42.143837 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:23:42.143849 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 00:23:42.143856 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:23:42.143863 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:23:42.143870 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:23:42.143906 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:23:42.143914 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:23:42.143921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:23:42.143928 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:23:42.143939 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:23:42.143946 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:23:42.143953 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:23:42.143960 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:23:42.143967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:23:42.143973 kernel: landlock: Up and running. Jan 24 00:23:42.143980 kernel: SELinux: Initializing. Jan 24 00:23:42.143987 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:23:42.143994 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:23:42.144005 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:23:42.144012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:23:42.144019 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:23:42.144026 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:23:42.144033 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:23:42.144040 kernel: signal: max sigframe size: 1776 Jan 24 00:23:42.144047 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:23:42.144055 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:23:42.144062 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:23:42.144129 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:23:42.144143 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:23:42.144155 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:23:42.144166 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:23:42.144178 kernel: smpboot: Max logical packages: 1 Jan 24 00:23:42.144189 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:23:42.144201 kernel: devtmpfs: initialized Jan 24 00:23:42.144212 kernel: x86/mm: Memory block size: 128MB Jan 24 00:23:42.144224 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 24 00:23:42.144241 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 24 00:23:42.144254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 24 00:23:42.144267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 24 00:23:42.144281 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 24 00:23:42.144294 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:23:42.144307 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:23:42.144319 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:23:42.144330 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:23:42.144341 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:23:42.144358 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:23:42.144369 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:23:42.144381 kernel: audit: type=2000 audit(1769214215.931:1): state=initialized audit_enabled=0 res=1 Jan 24 00:23:42.144394 kernel: cpuidle: using governor menu Jan 24 00:23:42.144406 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:23:42.144419 kernel: dca service started, version 1.12.1 Jan 24 00:23:42.144431 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:23:42.144442 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:23:42.144458 kernel: PCI: Using configuration type 1 for base access Jan 24 00:23:42.144470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
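The SMP figures above are internally consistent: the preset lpj of 2445426 corresponds to 4890.85 BogoMIPS per CPU, and four CPUs give the 19563.40 total. A small re-derivation (illustrative; the helper mirrors the kernel's truncating "%lu.%02lu" formatting, and HZ=1000 is an assumption these numbers imply rather than something the log states):

    def bogomips(lpj: int, hz: int = 1000) -> str:
        # Truncating two-decimal formatting, as the kernel prints it.
        return f"{lpj // (500000 // hz)}.{(lpj // (5000 // hz)) % 100:02d}"

    lpj = 2445426              # from "Calibrating delay loop (skipped) ... (lpj=2445426)"
    print(bogomips(lpj))       # -> 4890.85  (per CPU)
    print(bogomips(4 * lpj))   # -> 19563.40 ("Total of 4 processors activated")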
Jan 24 00:23:42.144482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:23:42.144495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:23:42.144509 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:23:42.144523 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:23:42.144536 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:23:42.144548 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:23:42.144561 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:23:42.144581 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:23:42.144594 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:23:42.144607 kernel: ACPI: Interpreter enabled Jan 24 00:23:42.144620 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:23:42.144632 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:23:42.144644 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:23:42.144657 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:23:42.144669 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:23:42.144750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:23:42.145353 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:23:42.145617 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:23:42.145921 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:23:42.145942 kernel: PCI host bridge to bus 0000:00 Jan 24 00:23:42.146321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:23:42.146523 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:23:42.146801 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:23:42.147011 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:23:42.147263 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:23:42.147464 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 24 00:23:42.147664 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:23:42.148198 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:23:42.148544 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:23:42.148849 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 24 00:23:42.149160 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 24 00:23:42.149382 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:23:42.149615 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 24 00:23:42.149931 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:23:42.150449 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:23:42.150772 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 24 00:23:42.151027 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 24 00:23:42.151316 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 24 00:23:42.151569 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 24 00:23:42.151856 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 24 00:23:42.152132 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 24 00:23:42.152333 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 24 00:23:42.152740 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:23:42.153051 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 24 00:23:42.153337 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 24 00:23:42.153601 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 24 00:23:42.153885 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 24 00:23:42.154226 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:23:42.154443 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:23:42.154795 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:23:42.155029 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 24 00:23:42.155406 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 24 00:23:42.155747 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:23:42.155960 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 24 00:23:42.155980 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:23:42.155995 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:23:42.156008 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:23:42.156031 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:23:42.156043 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:23:42.156054 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:23:42.156111 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:23:42.156125 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:23:42.156137 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:23:42.156148 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:23:42.156160 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:23:42.156173 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:23:42.156193 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:23:42.156207 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:23:42.156220 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:23:42.156232 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:23:42.156244 kernel: iommu: Default domain type: Translated Jan 24 00:23:42.156258 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:23:42.156270 kernel: efivars: Registered efivars operations Jan 24 00:23:42.156284 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:23:42.156297 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:23:42.156314 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 24 00:23:42.156328 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 24 00:23:42.156339 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 24 00:23:42.156350 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 24 00:23:42.156648 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:23:42.156933 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:23:42.157195 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:23:42.157214 kernel: vgaarb: loaded
Jan 24 00:23:42.157233 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:23:42.157245 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:23:42.157257 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:23:42.157269 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:23:42.157282 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:23:42.157294 kernel: pnp: PnP ACPI init Jan 24 00:23:42.157735 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:23:42.157755 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:23:42.157767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:23:42.157786 kernel: NET: Registered PF_INET protocol family Jan 24 00:23:42.157797 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:23:42.157810 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:23:42.157822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:23:42.157834 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:23:42.157846 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:23:42.157858 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:23:42.157869 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:23:42.157888 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:23:42.157900 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:23:42.157914 kernel: NET: Registered PF_XDP protocol family Jan 24 00:23:42.158186 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 24 00:23:42.158408 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 24 00:23:42.158610 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:23:42.158869 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:23:42.159132 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:23:42.159348 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:23:42.159562 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:23:42.159850 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 24 00:23:42.159869 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:23:42.159883 kernel: Initialise system trusted keyrings Jan 24 00:23:42.159895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:23:42.159909 kernel: Key type asymmetric registered Jan 24 00:23:42.159924 kernel: Asymmetric key parser 'x509' registered Jan 24 00:23:42.159938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:23:42.159960 kernel: io scheduler mq-deadline registered Jan 24 00:23:42.159971 kernel: io scheduler kyber registered Jan 24 00:23:42.159983 kernel: io scheduler bfq registered Jan 24 00:23:42.159996 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:23:42.160010 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:23:42.160023 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:23:42.160035 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:23:42.160048 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:23:42.160062 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:23:42.160133 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:23:42.160147 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:23:42.160161 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:23:42.160617 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:23:42.160889 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:23:42.160912 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:23:42.161152 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:23:40 UTC (1769214220) Jan 24 00:23:42.161348 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:23:42.161371 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:23:42.161383 kernel: efifb: probing for efifb Jan 24 00:23:42.161396 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 24 00:23:42.161407 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 24 00:23:42.161419 kernel: efifb: scrolling: redraw Jan 24 00:23:42.161430 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 24 00:23:42.161442 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:23:42.161453 kernel: fb0: EFI VGA frame buffer device Jan 24 00:23:42.161464 kernel: pstore: Using crash dump compression: deflate Jan 24 00:23:42.161482 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:23:42.161493 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:23:42.161505 kernel: Segment Routing with IPv6 Jan 24 00:23:42.161517 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:23:42.161528 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:23:42.161540 kernel: Key type dns_resolver registered Jan 24 00:23:42.161552 kernel: IPI shorthand broadcast: enabled Jan 24 00:23:42.161592 kernel: sched_clock: Marking stable (4534136025, 539317079)->(6470076694, -1396623590) Jan 24 00:23:42.161609 kernel: registered taskstats version 1 Jan 24 00:23:42.161625 kernel: Loading compiled-in X.509 certificates Jan 24 00:23:42.161638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:23:42.161650 kernel: Key type .fscrypt registered Jan 24 00:23:42.161662 kernel: Key type fscrypt-provisioning registered Jan 24 00:23:42.161741 kernel: ima: No TPM chip found, activating TPM-bypass!
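The rtc_cmos entry above pairs a wall-clock reading with its Unix timestamp, and the two agree; a one-line check (illustrative, not from the log):

    import datetime as dt

    # 1769214220 is the epoch value rtc_cmos logged next to the RTC time.
    print(dt.datetime.fromtimestamp(1769214220, tz=dt.timezone.utc).isoformat())
    # -> 2026-01-24T00:23:40+00:00, matching "setting system clock to 2026-01-24T00:23:40 UTC"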
Jan 24 00:23:42.161757 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:23:42.161769 kernel: ima: No architecture policies found Jan 24 00:23:42.161782 kernel: clk: Disabling unused clocks Jan 24 00:23:42.161794 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:23:42.161812 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:23:42.161825 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:23:42.161837 kernel: Run /init as init process Jan 24 00:23:42.161849 kernel: with arguments: Jan 24 00:23:42.161861 kernel: /init Jan 24 00:23:42.161873 kernel: with environment: Jan 24 00:23:42.161887 kernel: HOME=/ Jan 24 00:23:42.161898 kernel: TERM=linux Jan 24 00:23:42.161951 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:23:42.161974 systemd[1]: Detected virtualization kvm. Jan 24 00:23:42.161989 systemd[1]: Detected architecture x86-64. Jan 24 00:23:42.162003 systemd[1]: Running in initrd. Jan 24 00:23:42.162015 systemd[1]: No hostname configured, using default hostname. Jan 24 00:23:42.162028 systemd[1]: Hostname set to <localhost>. Jan 24 00:23:42.162041 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:23:42.162058 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:23:42.162119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:23:42.162133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:23:42.162147 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:23:42.162160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:23:42.162173 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:23:42.162192 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:23:42.162208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:23:42.162221 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:23:42.162234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:23:42.162246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:23:42.162259 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:23:42.162276 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:23:42.162289 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:23:42.162301 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:23:42.162314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:23:42.162326 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:23:42.162338 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:23:42.162350 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:23:42.162362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:23:42.162374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:23:42.162391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:23:42.162404 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:23:42.162417 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:23:42.162429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:23:42.162441 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:23:42.162454 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:23:42.162467 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:23:42.162480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:23:42.162493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:23:42.162511 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:23:42.162564 systemd-journald[194]: Collecting audit messages is disabled. Jan 24 00:23:42.162593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:23:42.162611 systemd-journald[194]: Journal started Jan 24 00:23:42.162637 systemd-journald[194]: Runtime Journal (/run/log/journal/a24e68f718cd4a35bc03debc9f3fcfbe) is 6.0M, max 48.3M, 42.2M free. Jan 24 00:23:42.179105 systemd-modules-load[195]: Inserted module 'overlay' Jan 24 00:23:42.198557 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:23:42.211743 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:23:42.223150 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:23:42.228116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:23:42.248928 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:23:42.259148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:23:42.282868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:23:42.306991 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:23:42.308030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:23:42.321180 kernel: Bridge firewalling registered Jan 24 00:23:42.321184 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 24 00:23:42.340354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:23:42.342123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:23:42.342629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:23:42.414447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:23:42.420842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:23:42.461356 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:23:42.496198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:23:42.507825 dracut-cmdline[226]: dracut-dracut-053 Jan 24 00:23:42.513205 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:23:42.551135 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:23:42.650147 systemd-resolved[253]: Positive Trust Anchors: Jan 24 00:23:42.651175 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:23:42.652862 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:23:42.662913 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 24 00:23:42.668208 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:23:42.684872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:23:42.770865 kernel: SCSI subsystem initialized Jan 24 00:23:42.798280 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:23:42.824808 kernel: iscsi: registered transport (tcp) Jan 24 00:23:42.870487 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:23:42.870573 kernel: QLogic iSCSI HBA Driver Jan 24 00:23:43.009645 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:23:43.029254 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:23:43.097322 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:23:43.097803 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:23:43.107924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:23:43.241205 kernel: raid6: avx2x4 gen() 18748 MB/s Jan 24 00:23:43.259428 kernel: raid6: avx2x2 gen() 20200 MB/s Jan 24 00:23:43.282239 kernel: raid6: avx2x1 gen() 14660 MB/s Jan 24 00:23:43.282877 kernel: raid6: using algorithm avx2x2 gen() 20200 MB/s Jan 24 00:23:43.301478 kernel: raid6: .... xor() 24407 MB/s, rmw enabled Jan 24 00:23:43.301570 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:23:43.337926 kernel: xor: automatically using best checksumming function avx Jan 24 00:23:43.616807 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:23:43.644581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:23:43.665135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:23:43.694341 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 24 00:23:43.705352 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
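The raid6 lines above show the kernel benchmarking its gen() implementations and keeping the fastest (avx2x2 at 20200 MB/s). The same selection can be reproduced from the log text; a minimal sketch with regex and variable names of my own choosing:

    import re

    BENCH = """raid6: avx2x4 gen() 18748 MB/s
    raid6: avx2x2 gen() 20200 MB/s
    raid6: avx2x1 gen() 14660 MB/s"""

    # Keep the implementation with the highest measured throughput,
    # which is what the kernel's raid6 selection does.
    results = re.findall(r"raid6: (\S+) gen\(\) (\d+) MB/s", BENCH)
    name, speed = max(results, key=lambda r: int(r[1]))
    print(name, speed)   # -> avx2x2 20200, matching "using algorithm avx2x2 gen()"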
Jan 24 00:23:43.724034 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:23:43.758034 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 24 00:23:43.839560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:23:43.861188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:23:44.053499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:23:44.088202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:23:44.110537 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:23:44.118258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:23:44.121933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:23:44.125436 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:23:44.148760 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:23:44.166915 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:23:44.167279 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:23:44.176161 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:23:44.183393 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:23:44.228280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:23:44.228314 kernel: GPT:9289727 != 19775487 Jan 24 00:23:44.228330 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:23:44.228365 kernel: GPT:9289727 != 19775487 Jan 24 00:23:44.228379 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:23:44.228412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:23:44.183506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:23:44.228644 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:23:44.245161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:23:44.245623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:23:44.246782 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:23:44.279857 kernel: libata version 3.00 loaded. Jan 24 00:23:44.280755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:23:44.304772 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:23:44.309802 kernel: AES CTR mode by8 optimization enabled Jan 24 00:23:44.309953 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:23:44.310419 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:23:44.314159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:23:44.329251 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:23:44.330984 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:23:44.341794 kernel: scsi host0: ahci Jan 24 00:23:44.345794 kernel: scsi host1: ahci Jan 24 00:23:44.349749 kernel: scsi host2: ahci Jan 24 00:23:44.352955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
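The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 19775487") are the usual signature of a disk image grown after it was built: the backup GPT header still sits at the end of the original, smaller image. The arithmetic, using the sector counts from the log (variable names are mine):

    SECTOR = 512
    total_sectors = 19775488           # virtio_blk: "[vda] 19775488 512-byte logical blocks"
    expected_alt = total_sectors - 1   # the backup GPT header belongs on the last LBA
    found_alt = 9289727                # where it actually is, per the warning
    print(expected_alt)                        # -> 19775487
    print((found_alt + 1) * SECTOR / 1e9)      # -> ~4.76, i.e. a ~4.8 GB original image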
Jan 24 00:23:44.373274 kernel: scsi host3: ahci Jan 24 00:23:44.373578 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463) Jan 24 00:23:44.373603 kernel: scsi host4: ahci Jan 24 00:23:44.385798 kernel: scsi host5: ahci Jan 24 00:23:44.386216 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 Jan 24 00:23:44.386237 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460) Jan 24 00:23:44.386253 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 Jan 24 00:23:44.395409 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 Jan 24 00:23:44.395470 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 Jan 24 00:23:44.403887 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 Jan 24 00:23:44.403978 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 Jan 24 00:23:44.419532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:23:44.435375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:23:44.445189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:23:44.447971 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:23:44.476601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:23:44.505414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:23:44.513936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:23:44.532840 disk-uuid[559]: Primary Header is updated. Jan 24 00:23:44.532840 disk-uuid[559]: Secondary Entries is updated. Jan 24 00:23:44.532840 disk-uuid[559]: Secondary Header is updated. Jan 24 00:23:44.539803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:23:44.552804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:23:44.722137 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:23:44.722219 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:23:44.723757 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:23:44.726793 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:23:44.732317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:23:44.732385 kernel: ata3.00: applying bridge limits Jan 24 00:23:44.735758 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:23:44.735801 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:23:44.738803 kernel: ata3.00: configured for UDMA/100 Jan 24 00:23:44.744878 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:23:44.804538 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:23:44.805253 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:23:44.819797 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:23:45.563795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:23:45.565023 disk-uuid[564]: The operation has completed successfully. Jan 24 00:23:45.614486 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 24 00:23:45.614820 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:23:45.673203 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:23:45.684546 sh[597]: Success Jan 24 00:23:45.701813 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:23:45.771552 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:23:45.797766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:23:45.805400 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:23:45.833849 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:23:45.833984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:23:45.834008 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:23:45.838909 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:23:45.842588 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:23:45.859943 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:23:45.868355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:23:45.890053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:23:45.901025 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:23:45.927175 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:23:45.927327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:23:45.927350 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:23:45.935796 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:23:45.956743 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:23:45.965390 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:23:45.975010 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:23:45.986017 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:23:46.167906 ignition[681]: Ignition 2.19.0 Jan 24 00:23:46.167927 ignition[681]: Stage: fetch-offline Jan 24 00:23:46.176370 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:23:46.167983 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:23:46.167999 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:23:46.168292 ignition[681]: parsed url from cmdline: "" Jan 24 00:23:46.168299 ignition[681]: no config URL provided Jan 24 00:23:46.168309 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:23:46.168328 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:23:46.168369 ignition[681]: op(1): [started] loading QEMU firmware config module Jan 24 00:23:46.168379 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:23:46.186364 ignition[681]: op(1): [finished] loading QEMU firmware config module Jan 24 00:23:46.239202 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
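In the fetch-offline stage below, Ignition finds no config URL on the command line and falls back to loading qemu_fw_cfg, the QEMU firmware-config channel. Once that module is loaded, the same blob is visible from userspace through sysfs; a sketch (the by_name key follows Ignition's QEMU provider convention, and whether it exists depends on how the VM was launched):

    from pathlib import Path

    # Ignition's QEMU provider conventionally reads its config from this fw_cfg key.
    raw = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")
    print(raw.read_bytes().decode() if raw.exists() else "no fw_cfg config entry")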
Jan 24 00:23:46.311483 systemd-networkd[785]: lo: Link UP Jan 24 00:23:46.311524 systemd-networkd[785]: lo: Gained carrier Jan 24 00:23:46.315462 systemd-networkd[785]: Enumeration completed Jan 24 00:23:46.315848 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:23:46.317018 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:23:46.317025 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:23:46.320865 systemd-networkd[785]: eth0: Link UP Jan 24 00:23:46.320873 systemd-networkd[785]: eth0: Gained carrier Jan 24 00:23:46.320887 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:23:46.347405 systemd[1]: Reached target network.target - Network. Jan 24 00:23:46.352870 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:23:46.490865 ignition[681]: parsing config with SHA512: ad83f9a8a2b01e3e3bf98679525b0e33f687e0cdeabd63f19bc4e4d4e8f758fc32cf072e82e698d0cf1c7e92e21d9725b5c5d84ef2c917b87326213c87a60af4 Jan 24 00:23:46.509597 unknown[681]: fetched base config from "system" Jan 24 00:23:46.509621 unknown[681]: fetched user config from "qemu" Jan 24 00:23:46.510323 ignition[681]: fetch-offline: fetch-offline passed Jan 24 00:23:46.510429 ignition[681]: Ignition finished successfully Jan 24 00:23:46.524253 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:23:46.524841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:23:46.548211 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:23:46.579043 ignition[789]: Ignition 2.19.0 Jan 24 00:23:46.579085 ignition[789]: Stage: kargs Jan 24 00:23:46.579395 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:23:46.579414 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:23:46.580666 ignition[789]: kargs: kargs passed Jan 24 00:23:46.580807 ignition[789]: Ignition finished successfully Jan 24 00:23:46.599388 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:23:46.615050 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:23:46.646021 ignition[797]: Ignition 2.19.0 Jan 24 00:23:46.646052 ignition[797]: Stage: disks Jan 24 00:23:46.646400 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:23:46.650007 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:23:46.646415 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:23:46.655468 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:23:46.647748 ignition[797]: disks: disks passed Jan 24 00:23:46.661334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:23:46.647802 ignition[797]: Ignition finished successfully Jan 24 00:23:46.668044 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:23:46.671027 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:23:46.677004 systemd[1]: Reached target basic.target - Basic System. 
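Ignition logs the SHA512 of the config it parsed (the ad83f9... digest above), which later lets you confirm exactly which config a machine booted with. A cross-check sketch (config.ign is an assumed local copy of the served config, not a path from the log):

    import hashlib

    # Recompute the digest Ignition logged for the parsed config.
    with open("config.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
    # compare against the "parsing config with SHA512: ad83f9..." line above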
Jan 24 00:23:46.702186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:23:46.723478 systemd-resolved[253]: Detected conflict on linux IN A 10.0.0.16 Jan 24 00:23:46.723526 systemd-resolved[253]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Jan 24 00:23:46.737220 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:23:46.748545 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:23:46.779020 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:23:47.004839 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:23:47.006593 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:23:47.012513 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:23:47.032157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:23:47.040983 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:23:47.046433 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:23:47.085401 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 24 00:23:47.085433 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:23:47.085445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:23:47.085456 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:23:47.085466 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:23:47.046521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:23:47.046569 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:23:47.061960 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:23:47.087501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:23:47.116005 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:23:47.182279 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:23:47.191189 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:23:47.198521 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:23:47.210443 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:23:47.379228 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:23:47.398996 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:23:47.408019 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:23:47.423286 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:23:47.429791 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:23:47.459250 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 24 00:23:47.482433 ignition[928]: INFO : Ignition 2.19.0
Jan 24 00:23:47.482433 ignition[928]: INFO : Stage: mount
Jan 24 00:23:47.489367 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:23:47.489367 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:23:47.502099 ignition[928]: INFO : mount: mount passed
Jan 24 00:23:47.506183 ignition[928]: INFO : Ignition finished successfully
Jan 24 00:23:47.515415 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:23:47.532202 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:23:48.008615 systemd-networkd[785]: eth0: Gained IPv6LL
Jan 24 00:23:48.024268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:23:48.044768 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Jan 24 00:23:48.044852 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:23:48.044875 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:23:48.048226 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:23:48.059814 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:23:48.063507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:23:48.118768 ignition[958]: INFO : Ignition 2.19.0
Jan 24 00:23:48.118768 ignition[958]: INFO : Stage: files
Jan 24 00:23:48.126243 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:23:48.126243 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:23:48.126243 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:23:48.126243 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:23:48.126243 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:23:48.152258 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:23:48.152258 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:23:48.152258 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:23:48.152258 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:23:48.152258 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:23:48.129620 unknown[958]: wrote ssh authorized keys file for user: core
Jan 24 00:23:48.215471 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:23:48.334459 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:23:48.334459 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:23:48.353280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 00:23:48.842599 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:23:49.618121 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:23:49.618121 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 24 00:23:49.635336 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:23:49.734943 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:23:49.751446 ignition[958]: INFO : files: files passed
Jan 24 00:23:49.751446 ignition[958]: INFO : Ignition finished successfully
Jan 24 00:23:49.760354 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:23:49.807783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:23:49.815649 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:23:49.824565 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:23:49.824809 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:23:49.850295 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:23:49.864588 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:23:49.864588 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:23:49.856805 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:23:49.893076 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:23:49.865638 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:23:49.912297 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:23:49.981625 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:23:49.981975 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:23:49.991969 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:23:50.009603 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:23:50.020255 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:23:50.038072 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:23:50.064068 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:23:50.079249 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:23:50.105128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:23:50.121005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:23:50.133278 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:23:50.141576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:23:50.147477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:23:50.159558 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:23:50.170943 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:23:50.180644 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:23:50.187088 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:23:50.206307 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:23:50.218440 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:23:50.230249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:23:50.242762 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:23:50.253001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:23:50.263378 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:23:50.273404 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:23:50.280999 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:23:50.293373 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:23:50.305289 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:23:50.318761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:23:50.324376 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:23:50.331396 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:23:50.331615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:23:50.367434 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:23:50.367975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:23:50.383037 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:23:50.402044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:23:50.409844 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:23:50.423492 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:23:50.461817 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:23:50.475459 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:23:50.483322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:23:50.516075 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:23:50.522671 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:23:50.540404 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:23:50.550465 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:23:50.568381 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:23:50.576459 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:23:50.606352 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:23:50.619225 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:23:50.623766 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:23:50.644362 ignition[1013]: INFO : Ignition 2.19.0
Jan 24 00:23:50.644362 ignition[1013]: INFO : Stage: umount
Jan 24 00:23:50.657002 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:23:50.657002 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:23:50.657002 ignition[1013]: INFO : umount: umount passed
Jan 24 00:23:50.657002 ignition[1013]: INFO : Ignition finished successfully
Jan 24 00:23:50.645109 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:23:50.649556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:23:50.649929 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:23:50.657282 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:23:50.657638 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:23:50.665922 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:23:50.666175 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:23:50.679449 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:23:50.679748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:23:50.686480 systemd[1]: Stopped target network.target - Network.
Jan 24 00:23:50.700087 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:23:50.700287 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:23:50.712604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:23:50.712780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:23:50.713544 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:23:50.713623 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:23:50.727898 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:23:50.728067 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:23:50.729531 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:23:50.731660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:23:50.833526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:23:50.833857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:23:50.845607 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:23:50.845781 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:23:50.867531 systemd-networkd[785]: eth0: DHCPv6 lease lost
Jan 24 00:23:50.886126 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:23:50.886428 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:23:50.897943 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:23:50.898031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:23:50.912061 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:23:50.924630 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:23:50.924878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:23:50.933038 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:23:50.933186 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:23:50.935810 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:23:50.935926 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:23:50.936863 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:23:50.939558 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:23:50.972455 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:23:50.972815 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:23:50.982451 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:23:50.982605 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:23:50.991343 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:23:50.991506 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:23:50.999234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:23:50.999388 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:23:51.003073 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:23:51.003356 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:23:51.012291 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:23:51.012496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:23:51.021643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:23:51.022323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:23:51.057640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:23:51.074239 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:23:51.074603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:23:51.093236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:23:51.093433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:23:51.106111 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:23:51.106431 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:23:51.114440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:23:51.114670 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:23:51.131471 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:23:51.135490 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:23:51.135631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:23:51.173503 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:23:51.233436 systemd[1]: Switching root.
Jan 24 00:23:51.804777 kernel: hrtimer: interrupt took 16107114 ns
Jan 24 00:23:51.805450 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:23:51.805939 systemd-journald[194]: Journal stopped
Jan 24 00:23:54.810618 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:23:54.810808 kernel: SELinux: policy capability open_perms=1
Jan 24 00:23:54.810823 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:23:54.810854 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:23:54.810890 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:23:54.810902 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:23:54.810954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:23:54.810975 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:23:54.810991 kernel: audit: type=1403 audit(1769214232.383:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:23:54.811099 systemd[1]: Successfully loaded SELinux policy in 102.662ms.
Jan 24 00:23:54.811120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 94.805ms.
Jan 24 00:23:54.811160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:23:54.811220 systemd[1]: Detected virtualization kvm.
Jan 24 00:23:54.811259 systemd[1]: Detected architecture x86-64.
Jan 24 00:23:54.811276 systemd[1]: Detected first boot.
Jan 24 00:23:54.811306 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:23:54.811323 zram_generator::config[1059]: No configuration found.
Jan 24 00:23:54.811343 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:23:54.811385 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:23:54.811402 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:23:54.811418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:23:54.811436 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:23:54.811488 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:23:54.811528 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:23:54.811563 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:23:54.811580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:23:54.811597 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:23:54.811614 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:23:54.811656 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:23:54.811673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:23:54.811751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:23:54.811770 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:23:54.811793 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:23:54.811810 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:23:54.811827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:23:54.811876 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:23:54.811913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:23:54.811930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:23:54.811946 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:23:54.811963 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:23:54.811980 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:23:54.812002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:23:54.812019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:23:54.812035 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:23:54.812052 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:23:54.812069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:23:54.812088 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:23:54.812105 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:23:54.812121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:23:54.812204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:23:54.812226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:23:54.812243 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:23:54.812260 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:23:54.812277 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:23:54.812293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:54.812309 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:23:54.812326 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:23:54.812373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:23:54.812392 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:23:54.812410 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:23:54.812455 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:23:54.812473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:23:54.812492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:23:54.812512 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:23:54.812533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:23:54.812550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:23:54.812598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:23:54.812616 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:23:54.812632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:23:54.812649 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:23:54.812736 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:23:54.812777 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:23:54.812794 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:23:54.812810 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:23:54.812831 kernel: fuse: init (API version 7.39)
Jan 24 00:23:54.812848 kernel: loop: module loaded
Jan 24 00:23:54.812864 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:23:54.812880 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:23:54.812897 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:23:54.812913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:23:54.812930 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:23:54.812968 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:23:54.812984 kernel: ACPI: bus type drm_connector registered
Jan 24 00:23:54.813004 systemd[1]: Stopped verity-setup.service.
Jan 24 00:23:54.813021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:54.813038 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:23:54.813054 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:23:54.813133 systemd-journald[1136]: Collecting audit messages is disabled.
Jan 24 00:23:54.813168 systemd-journald[1136]: Journal started
Jan 24 00:23:54.813262 systemd-journald[1136]: Runtime Journal (/run/log/journal/a24e68f718cd4a35bc03debc9f3fcfbe) is 6.0M, max 48.3M, 42.2M free.
Jan 24 00:23:53.808279 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:23:53.852900 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:23:53.854808 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:23:53.855882 systemd[1]: systemd-journald.service: Consumed 2.111s CPU time.
Jan 24 00:23:54.821956 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:23:54.826834 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:23:54.833918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:23:54.839639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:23:54.845570 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:23:54.850813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:23:54.856955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:23:54.864808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:23:54.865229 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:23:54.872480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:23:54.872939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:23:54.881900 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:23:54.882496 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:23:54.889101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:23:54.889531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:23:54.897381 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:23:54.900537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:23:54.906946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:23:54.907362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:23:54.913618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:23:54.921117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:23:54.941253 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:23:55.010364 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:23:55.028085 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:23:55.038240 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:23:55.042324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:23:55.042398 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:23:55.048469 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:23:55.055858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:23:55.062888 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:23:55.066963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:23:55.069832 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:23:55.079487 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:23:55.084654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:23:55.090942 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:23:55.097353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:23:55.099938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:23:55.113631 systemd-journald[1136]: Time spent on flushing to /var/log/journal/a24e68f718cd4a35bc03debc9f3fcfbe is 61.599ms for 981 entries.
Jan 24 00:23:55.113631 systemd-journald[1136]: System Journal (/var/log/journal/a24e68f718cd4a35bc03debc9f3fcfbe) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:23:55.534012 systemd-journald[1136]: Received client request to flush runtime journal.
Jan 24 00:23:55.535019 kernel: loop0: detected capacity change from 0 to 142488
Jan 24 00:23:55.119161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:23:55.122533 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:23:55.129799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:23:55.135590 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:23:55.350131 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:23:55.359396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:23:55.366163 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:23:55.391641 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:23:55.412062 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:23:55.425306 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:23:55.539307 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:23:55.564756 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:23:55.594110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:23:55.619224 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 24 00:23:55.622082 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:23:55.623403 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:23:55.630269 kernel: loop1: detected capacity change from 0 to 140768
Jan 24 00:23:55.650088 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:23:55.708554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:23:56.664953 kernel: loop2: detected capacity change from 0 to 229808
Jan 24 00:23:56.707749 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 24 00:23:56.707794 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 24 00:23:56.722412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:23:56.752801 kernel: loop3: detected capacity change from 0 to 142488
Jan 24 00:23:56.808941 kernel: loop4: detected capacity change from 0 to 140768
Jan 24 00:23:56.843568 kernel: loop5: detected capacity change from 0 to 229808
Jan 24 00:23:56.861934 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:23:56.863336 (sd-merge)[1197]: Merged extensions into '/usr'.
Jan 24 00:23:56.872513 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:23:56.872611 systemd[1]: Reloading...
Jan 24 00:23:57.042644 zram_generator::config[1222]: No configuration found.
Jan 24 00:23:57.480908 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:23:57.615248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:23:57.663355 systemd[1]: Reloading finished in 789 ms.
Jan 24 00:23:57.748533 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:23:57.754412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:23:57.767328 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:23:57.788263 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:23:57.794162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:23:57.802369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:23:57.809975 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:23:57.810025 systemd[1]: Reloading...
Jan 24 00:23:57.849348 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:23:57.850115 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:23:57.852871 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:23:57.853587 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 24 00:23:57.853993 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 24 00:23:57.865306 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:23:57.865340 systemd-tmpfiles[1262]: Skipping /boot
Jan 24 00:23:57.876610 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Jan 24 00:23:57.917189 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:23:57.917281 systemd-tmpfiles[1262]: Skipping /boot
Jan 24 00:23:57.919803 zram_generator::config[1292]: No configuration found.
Jan 24 00:23:58.514804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1326)
Jan 24 00:23:58.669303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:23:58.827436 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:23:58.827932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:23:58.834959 systemd[1]: Reloading finished in 1024 ms.
Jan 24 00:23:58.856969 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 24 00:23:58.859866 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:23:58.860455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 24 00:23:58.860621 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:23:58.865424 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:23:58.909371 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 24 00:23:58.907659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:23:58.919376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:23:58.934767 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:23:59.114254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:59.133342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:23:59.146196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:23:59.154512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:23:59.159171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:23:59.166314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:23:59.176380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:23:59.186876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:23:59.190765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:23:59.230378 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:23:59.242154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:23:59.253426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:23:59.267251 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:23:59.272324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:59.281198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:23:59.281639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:23:59.286908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:23:59.287280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:23:59.316018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:23:59.316617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:23:59.322945 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:23:59.330490 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:23:59.401336 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:23:59.493939 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:23:59.568862 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:23:59.573650 augenrules[1388]: No rules
Jan 24 00:23:59.579287 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:23:59.608189 kernel: kvm_amd: TSC scaling supported
Jan 24 00:23:59.608326 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:23:59.608364 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:23:59.609482 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:23:59.609636 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:23:59.611781 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:23:59.645819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:59.646503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:23:59.669140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:23:59.673958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:23:59.681869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:23:59.690079 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:23:59.690938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:23:59.694303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:23:59.697832 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:23:59.703029 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:23:59.708632 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:23:59.714197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:23:59.718377 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:23:59.718457 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:23:59.720026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:23:59.720435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:23:59.725935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:23:59.726321 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:23:59.731913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:23:59.732311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:23:59.738894 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:23:59.739280 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:23:59.744364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:23:59.750998 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:23:59.791614 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:23:59.795929 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:23:59.796117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:23:59.820858 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:23:59.827936 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:23:59.846070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:23:59.871463 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:23:59.890454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:23:59.905065 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:23:59.917394 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:23:59.956365 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:23:59.956550 systemd-networkd[1371]: lo: Link UP
Jan 24 00:23:59.956559 systemd-networkd[1371]: lo: Gained carrier
Jan 24 00:23:59.959974 systemd-resolved[1372]: Positive Trust Anchors:
Jan 24 00:23:59.959994 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:23:59.960040 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:23:59.960455 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:23:59.961809 systemd-networkd[1371]: Enumeration completed
Jan 24 00:23:59.963122 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:23:59.963163 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:23:59.963780 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:23:59.967167 systemd-networkd[1371]: eth0: Link UP
Jan 24 00:23:59.967181 systemd-networkd[1371]: eth0: Gained carrier
Jan 24 00:23:59.967208 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:23:59.972407 systemd-resolved[1372]: Defaulting to hostname 'linux'.
Jan 24 00:23:59.983662 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:23:59.996054 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:24:00.000358 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:24:00.005851 systemd[1]: Reached target network.target - Network.
Jan 24 00:24:00.006908 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:24:00.008887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:24:00.012824 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:24:00.012859 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Jan 24 00:24:00.016507 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:24:01.290009 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:24:01.290038 systemd-resolved[1372]: Clock change detected. Flushing caches.
Jan 24 00:24:01.290170 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 24 00:24:01.290249 systemd-timesyncd[1403]: Initial clock synchronization to Sat 2026-01-24 00:24:01.289877 UTC.
Jan 24 00:24:01.294387 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:24:01.298231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:24:01.302672 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:24:01.306753 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:24:01.306815 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:24:01.309911 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:24:01.314758 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:24:01.320293 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:24:01.332448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:24:01.337230 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:24:01.341062 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:24:01.345954 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:24:01.352536 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:24:01.352806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:24:01.355798 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:24:01.362243 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:24:01.375187 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:24:01.387227 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:24:01.390829 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:24:01.394178 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:24:01.399488 jq[1436]: false
Jan 24 00:24:01.400227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:24:01.405545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:24:01.411967 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:24:01.423008 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:24:01.427470 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:24:01.428331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:24:01.429703 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:24:01.438018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:24:01.441379 dbus-daemon[1435]: [system] SELinux support is enabled
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found loop3
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found loop4
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found loop5
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found sr0
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found vda
Jan 24 00:24:01.442484 extend-filesystems[1437]: Found vda1
Jan 24 00:24:01.442297 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda2
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda3
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found usr
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda4
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda6
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda7
Jan 24 00:24:01.492033 extend-filesystems[1437]: Found vda9
Jan 24 00:24:01.492033 extend-filesystems[1437]: Checking size of /dev/vda9
Jan 24 00:24:01.475907 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:24:01.558875 update_engine[1449]: I20260124 00:24:01.510493 1449 main.cc:92] Flatcar Update Engine starting
Jan 24 00:24:01.558875 update_engine[1449]: I20260124 00:24:01.533864 1449 update_check_scheduler.cc:74] Next update check in 4m34s
Jan 24 00:24:01.476245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:24:01.497167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:24:01.497788 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:24:01.540776 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:24:01.541708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:24:01.562756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:24:01.562865 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:24:01.572835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:24:01.572898 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:24:01.580632 jq[1450]: true
Jan 24 00:24:01.588978 extend-filesystems[1437]: Resized partition /dev/vda9
Jan 24 00:24:01.594448 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:24:01.600406 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:24:01.612452 tar[1457]: linux-amd64/LICENSE
Jan 24 00:24:01.612452 tar[1457]: linux-amd64/helm
Jan 24 00:24:01.618499 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:24:01.617928 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:24:01.626362 jq[1467]: true
Jan 24 00:24:01.630319 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:24:01.630399 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:24:01.630840 systemd-logind[1443]: New seat seat0.
Jan 24 00:24:01.641344 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:24:01.979196 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:24:01.989733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 00:24:01.996641 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:24:02.134785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316) Jan 24 00:24:02.196084 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:24:02.196084 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:24:02.196084 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:24:02.215761 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 24 00:24:02.199180 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:24:02.199664 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:24:02.379377 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:24:02.388999 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:24:02.551182 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:24:02.555180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:24:02.560195 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:24:02.577732 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:24:02.589033 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 24 00:24:02.593432 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:24:02.606022 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:37120.service - OpenSSH per-connection server daemon (10.0.0.1:37120). Jan 24 00:24:02.634245 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:24:02.699829 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:24:02.725992 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:24:02.731742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:02.744789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:24:02.766363 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:24:02.766814 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:24:02.848947 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:24:02.963367 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:24:02.983430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:24:03.005862 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:24:03.006222 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:24:03.029279 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:24:03.034512 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:24:03.037259 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:24:03.042925 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 24 00:24:03.602085 sshd[1507]: Accepted publickey for core from 10.0.0.1 port 37120 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:03.630391 sshd[1507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:03.683213 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:24:03.762427 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:24:03.790995 containerd[1460]: time="2026-01-24T00:24:03.790723359Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:24:03.887457 systemd-logind[1443]: New session 1 of user core. Jan 24 00:24:03.901483 containerd[1460]: time="2026-01-24T00:24:03.901387905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905306650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905388593Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905463723Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905818115Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905845496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905952005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:24:03.906113 containerd[1460]: time="2026-01-24T00:24:03.905966982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.907624 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:24:03.908723 containerd[1460]: time="2026-01-24T00:24:03.908430922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:24:03.908723 containerd[1460]: time="2026-01-24T00:24:03.908458283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.908723 containerd[1460]: time="2026-01-24T00:24:03.908478510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:24:03.908723 containerd[1460]: time="2026-01-24T00:24:03.908490994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:24:03.908881 containerd[1460]: time="2026-01-24T00:24:03.908726704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.909298 containerd[1460]: time="2026-01-24T00:24:03.909220115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:24:03.909432 containerd[1460]: time="2026-01-24T00:24:03.909397616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:24:03.909888 containerd[1460]: time="2026-01-24T00:24:03.909730206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:24:03.912744 containerd[1460]: time="2026-01-24T00:24:03.912719246Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:24:03.913111 containerd[1460]: time="2026-01-24T00:24:03.912862263Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:24:03.921825 containerd[1460]: time="2026-01-24T00:24:03.921744787Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:24:03.922530 containerd[1460]: time="2026-01-24T00:24:03.921961542Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:24:03.922530 containerd[1460]: time="2026-01-24T00:24:03.922214774Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:24:03.922530 containerd[1460]: time="2026-01-24T00:24:03.922236996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:24:03.922530 containerd[1460]: time="2026-01-24T00:24:03.922324148Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:24:03.922750 containerd[1460]: time="2026-01-24T00:24:03.922688228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:24:03.923284 containerd[1460]: time="2026-01-24T00:24:03.923230631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:24:03.923628 containerd[1460]: time="2026-01-24T00:24:03.923526743Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:24:03.923661 containerd[1460]: time="2026-01-24T00:24:03.923629846Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:24:03.923661 containerd[1460]: time="2026-01-24T00:24:03.923647990Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:24:03.923697 containerd[1460]: time="2026-01-24T00:24:03.923661976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923697 containerd[1460]: time="2026-01-24T00:24:03.923674389Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 24 00:24:03.923775 containerd[1460]: time="2026-01-24T00:24:03.923709425Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923831 containerd[1460]: time="2026-01-24T00:24:03.923800675Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923831 containerd[1460]: time="2026-01-24T00:24:03.923815242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923831 containerd[1460]: time="2026-01-24T00:24:03.923829789Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923889 containerd[1460]: time="2026-01-24T00:24:03.923841561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.923889 containerd[1460]: time="2026-01-24T00:24:03.923852551Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924039040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924063475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924075968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924152802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924174623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924187076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924197836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924234414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924281142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924296580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924308904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924341134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924354097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924398019Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:24:03.924743 containerd[1460]: time="2026-01-24T00:24:03.924469443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924482367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924492456Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924677130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924698921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924847068Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924898494Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.924909644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.925050889Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.925109978Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:24:03.925651 containerd[1460]: time="2026-01-24T00:24:03.925230864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:24:03.926918 containerd[1460]: time="2026-01-24T00:24:03.926297355Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:24:03.926918 containerd[1460]: time="2026-01-24T00:24:03.926392863Z" level=info msg="Connect containerd service" Jan 24 00:24:03.926918 containerd[1460]: time="2026-01-24T00:24:03.926497839Z" level=info msg="using legacy CRI server" Jan 24 00:24:03.926918 containerd[1460]: time="2026-01-24T00:24:03.926540479Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:24:03.926918 containerd[1460]: time="2026-01-24T00:24:03.926903656Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:24:03.928478 containerd[1460]: time="2026-01-24T00:24:03.928422481Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:24:03.928773 
containerd[1460]: time="2026-01-24T00:24:03.928695180Z" level=info msg="Start subscribing containerd event" Jan 24 00:24:03.929003 containerd[1460]: time="2026-01-24T00:24:03.928947862Z" level=info msg="Start recovering state" Jan 24 00:24:03.929290 containerd[1460]: time="2026-01-24T00:24:03.929256718Z" level=info msg="Start event monitor" Jan 24 00:24:03.929378 containerd[1460]: time="2026-01-24T00:24:03.929347588Z" level=info msg="Start snapshots syncer" Jan 24 00:24:03.929931 containerd[1460]: time="2026-01-24T00:24:03.929882797Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:24:03.930106 containerd[1460]: time="2026-01-24T00:24:03.930061510Z" level=info msg="Start streaming server" Jan 24 00:24:03.930952 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:24:03.931474 containerd[1460]: time="2026-01-24T00:24:03.931381875Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:24:03.931773 containerd[1460]: time="2026-01-24T00:24:03.931716830Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:24:03.933444 containerd[1460]: time="2026-01-24T00:24:03.933422038Z" level=info msg="containerd successfully booted in 0.145658s" Jan 24 00:24:03.935034 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:24:03.946112 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:24:04.205310 systemd[1544]: Queued start job for default target default.target. Jan 24 00:24:04.206443 tar[1457]: linux-amd64/README.md Jan 24 00:24:04.219997 systemd[1544]: Created slice app.slice - User Application Slice. Jan 24 00:24:04.220074 systemd[1544]: Reached target paths.target - Paths. Jan 24 00:24:04.220096 systemd[1544]: Reached target timers.target - Timers. Jan 24 00:24:04.223669 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:24:04.315007 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:24:04.318619 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:24:04.318902 systemd[1544]: Reached target sockets.target - Sockets. Jan 24 00:24:04.318959 systemd[1544]: Reached target basic.target - Basic System. Jan 24 00:24:04.319039 systemd[1544]: Reached target default.target - Main User Target. Jan 24 00:24:04.319172 systemd[1544]: Startup finished in 347ms. Jan 24 00:24:04.319479 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:24:04.331960 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:24:04.410316 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:33672.service - OpenSSH per-connection server daemon (10.0.0.1:33672). Jan 24 00:24:04.481182 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 33672 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:04.486976 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:04.496193 systemd-logind[1443]: New session 2 of user core. Jan 24 00:24:04.506954 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:24:04.764784 sshd[1558]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:04.780675 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:33672.service: Deactivated successfully. Jan 24 00:24:04.782894 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:24:04.785512 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. 
Jan 24 00:24:04.794298 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:33674.service - OpenSSH per-connection server daemon (10.0.0.1:33674). Jan 24 00:24:04.799766 systemd-logind[1443]: Removed session 2. Jan 24 00:24:04.853959 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 33674 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:04.857226 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:04.864986 systemd-logind[1443]: New session 3 of user core. Jan 24 00:24:04.883916 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:24:05.078631 sshd[1565]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:05.084851 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:33674.service: Deactivated successfully. Jan 24 00:24:05.088350 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:24:05.091310 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:24:05.093233 systemd-logind[1443]: Removed session 3. Jan 24 00:24:07.435114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:07.442433 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:24:07.445464 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:24:07.461143 systemd[1]: Startup finished in 4.891s (kernel) + 11.252s (initrd) + 13.910s (userspace) = 30.054s. Jan 24 00:24:10.824662 kubelet[1576]: E0124 00:24:10.824300 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:24:10.836310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:24:10.836782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:24:10.837671 systemd[1]: kubelet.service: Consumed 7.339s CPU time. Jan 24 00:24:15.130485 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:37158.service - OpenSSH per-connection server daemon (10.0.0.1:37158). Jan 24 00:24:15.203070 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 37158 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:15.206902 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:15.225709 systemd-logind[1443]: New session 4 of user core. Jan 24 00:24:15.238611 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:24:15.330177 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:15.349887 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:37158.service: Deactivated successfully. Jan 24 00:24:15.356930 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:24:15.366955 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:24:15.385486 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:37160.service - OpenSSH per-connection server daemon (10.0.0.1:37160). Jan 24 00:24:15.389654 systemd-logind[1443]: Removed session 4. 
Jan 24 00:24:15.463304 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 37160 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:15.467114 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:15.487319 systemd-logind[1443]: New session 5 of user core. Jan 24 00:24:15.503163 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:24:15.597733 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:15.617475 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:37160.service: Deactivated successfully. Jan 24 00:24:15.621848 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:24:15.624987 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:24:15.644720 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:37174.service - OpenSSH per-connection server daemon (10.0.0.1:37174). Jan 24 00:24:15.649402 systemd-logind[1443]: Removed session 5. Jan 24 00:24:15.692256 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 37174 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:15.695213 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:15.712115 systemd-logind[1443]: New session 6 of user core. Jan 24 00:24:15.724455 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:24:15.825620 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:15.856445 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:37174.service: Deactivated successfully. Jan 24 00:24:15.863511 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:24:15.877037 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:24:15.895460 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Jan 24 00:24:15.897326 systemd-logind[1443]: Removed session 6. Jan 24 00:24:15.974481 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:15.981384 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:15.996037 systemd-logind[1443]: New session 7 of user core. Jan 24 00:24:16.006043 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:24:16.116502 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:24:16.117168 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:24:16.168320 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 24 00:24:16.175515 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:16.195481 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:37182.service: Deactivated successfully. Jan 24 00:24:16.199165 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:24:16.201163 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:24:16.218736 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:37188.service - OpenSSH per-connection server daemon (10.0.0.1:37188). Jan 24 00:24:16.224281 systemd-logind[1443]: Removed session 7. 
Jan 24 00:24:16.264984 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 37188 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:16.269138 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:16.286354 systemd-logind[1443]: New session 8 of user core. Jan 24 00:24:16.297062 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:24:16.380446 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:24:16.381965 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:24:16.403926 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 24 00:24:16.416368 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:24:16.419293 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:24:16.468145 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:24:16.474321 auditctl[1626]: No rules Jan 24 00:24:16.477111 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:24:16.477638 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:24:16.481076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:24:16.660892 augenrules[1644]: No rules Jan 24 00:24:16.664362 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:24:16.668831 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 24 00:24:16.674625 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 24 00:24:16.690739 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:37188.service: Deactivated successfully. Jan 24 00:24:16.694273 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:24:16.698553 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:24:16.717331 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:37194.service - OpenSSH per-connection server daemon (10.0.0.1:37194). Jan 24 00:24:16.720072 systemd-logind[1443]: Removed session 8. Jan 24 00:24:16.774922 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 37194 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:24:16.783915 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:24:16.803097 systemd-logind[1443]: New session 9 of user core. Jan 24 00:24:16.817790 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:24:16.914314 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:24:16.915311 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:24:19.887080 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:24:19.893163 (dockerd)[1674]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:24:21.086978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:24:21.199827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:22.416990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:24:22.417339 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:24:22.614792 dockerd[1674]: time="2026-01-24T00:24:22.614634860Z" level=info msg="Starting up" Jan 24 00:24:22.931705 kubelet[1690]: E0124 00:24:22.931513 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:24:22.938436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:24:22.938804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:24:22.939246 systemd[1]: kubelet.service: Consumed 1.729s CPU time. Jan 24 00:24:23.194406 dockerd[1674]: time="2026-01-24T00:24:23.193975096Z" level=info msg="Loading containers: start." Jan 24 00:24:23.529684 kernel: Initializing XFRM netlink socket Jan 24 00:24:23.672163 systemd-networkd[1371]: docker0: Link UP Jan 24 00:24:23.699523 dockerd[1674]: time="2026-01-24T00:24:23.699418427Z" level=info msg="Loading containers: done." Jan 24 00:24:23.796380 dockerd[1674]: time="2026-01-24T00:24:23.796205184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:24:23.796380 dockerd[1674]: time="2026-01-24T00:24:23.796381523Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:24:23.797098 dockerd[1674]: time="2026-01-24T00:24:23.796501217Z" level=info msg="Daemon has completed initialization" Jan 24 00:24:23.847448 dockerd[1674]: time="2026-01-24T00:24:23.847163820Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:24:23.847670 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:24:25.512897 containerd[1460]: time="2026-01-24T00:24:25.512494375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 00:24:26.580741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034105924.mount: Deactivated successfully. 
Jan 24 00:24:30.107115 containerd[1460]: time="2026-01-24T00:24:30.106820692Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 24 00:24:30.107115 containerd[1460]: time="2026-01-24T00:24:30.106918895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:30.109221 containerd[1460]: time="2026-01-24T00:24:30.108922335Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:30.115408 containerd[1460]: time="2026-01-24T00:24:30.115256057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:30.117419 containerd[1460]: time="2026-01-24T00:24:30.117241984Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 4.604473638s" Jan 24 00:24:30.117703 containerd[1460]: time="2026-01-24T00:24:30.117504814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 00:24:30.123815 containerd[1460]: time="2026-01-24T00:24:30.123773080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 00:24:32.902671 containerd[1460]: time="2026-01-24T00:24:32.902177763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:32.904989 containerd[1460]: time="2026-01-24T00:24:32.902911285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 24 00:24:32.907186 containerd[1460]: time="2026-01-24T00:24:32.907075077Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:32.912773 containerd[1460]: time="2026-01-24T00:24:32.912712149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:32.915293 containerd[1460]: time="2026-01-24T00:24:32.915216700Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.791265097s" Jan 24 00:24:32.915543 containerd[1460]: time="2026-01-24T00:24:32.915416293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 00:24:32.919436 
containerd[1460]: time="2026-01-24T00:24:32.919195838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 00:24:33.075045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:24:33.098801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:33.372857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:33.381339 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:24:33.533728 kubelet[1910]: E0124 00:24:33.533644 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:24:33.538444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:24:33.538744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:24:34.679895 containerd[1460]: time="2026-01-24T00:24:34.679440972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:34.682175 containerd[1460]: time="2026-01-24T00:24:34.679887119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 24 00:24:34.682175 containerd[1460]: time="2026-01-24T00:24:34.681978399Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:34.687747 containerd[1460]: time="2026-01-24T00:24:34.687680122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:34.689703 containerd[1460]: time="2026-01-24T00:24:34.689644833Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.770412508s" Jan 24 00:24:34.689829 containerd[1460]: time="2026-01-24T00:24:34.689788746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 00:24:34.693064 containerd[1460]: time="2026-01-24T00:24:34.693019063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 00:24:36.475768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938749193.mount: Deactivated successfully. 
Jan 24 00:24:37.469644 containerd[1460]: time="2026-01-24T00:24:37.469368728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:37.471525 containerd[1460]: time="2026-01-24T00:24:37.469782446Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 24 00:24:37.473947 containerd[1460]: time="2026-01-24T00:24:37.473852886Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:37.477378 containerd[1460]: time="2026-01-24T00:24:37.477280342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:37.478250 containerd[1460]: time="2026-01-24T00:24:37.478195285Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.785127276s" Jan 24 00:24:37.478381 containerd[1460]: time="2026-01-24T00:24:37.478322461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 00:24:37.481603 containerd[1460]: time="2026-01-24T00:24:37.481439266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 00:24:38.410930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653355377.mount: Deactivated successfully. 
Jan 24 00:24:41.023261 containerd[1460]: time="2026-01-24T00:24:41.022532044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.023261 containerd[1460]: time="2026-01-24T00:24:41.022746151Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 24 00:24:41.026200 containerd[1460]: time="2026-01-24T00:24:41.025547951Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.044069 containerd[1460]: time="2026-01-24T00:24:41.043620927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.046114 containerd[1460]: time="2026-01-24T00:24:41.046033840Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.564493185s" Jan 24 00:24:41.046494 containerd[1460]: time="2026-01-24T00:24:41.046325782Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 00:24:41.050696 containerd[1460]: time="2026-01-24T00:24:41.050643686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:24:41.780640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565995931.mount: Deactivated successfully. 
Jan 24 00:24:41.788550 containerd[1460]: time="2026-01-24T00:24:41.788434075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.789927 containerd[1460]: time="2026-01-24T00:24:41.789645231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:24:41.791479 containerd[1460]: time="2026-01-24T00:24:41.791357565Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.796612 containerd[1460]: time="2026-01-24T00:24:41.796421624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:41.798042 containerd[1460]: time="2026-01-24T00:24:41.797942137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 747.229507ms" Jan 24 00:24:41.798226 containerd[1460]: time="2026-01-24T00:24:41.798110089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:24:41.801104 containerd[1460]: time="2026-01-24T00:24:41.801045946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 00:24:42.395669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366674666.mount: Deactivated successfully. Jan 24 00:24:43.562653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:24:43.578240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:44.037159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:44.065822 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:24:44.301362 kubelet[2044]: E0124 00:24:44.301167 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:24:44.311463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:24:44.311920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:24:46.408256 update_engine[1449]: I20260124 00:24:46.407678 1449 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:24:46.555321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2068) Jan 24 00:24:46.622674 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2071) Jan 24 00:24:47.872973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2071) Jan 24 00:24:48.194929 containerd[1460]: time="2026-01-24T00:24:48.193773486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:48.196507 containerd[1460]: time="2026-01-24T00:24:48.195013256Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 24 00:24:48.197187 containerd[1460]: time="2026-01-24T00:24:48.197144227Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:48.201215 containerd[1460]: time="2026-01-24T00:24:48.201180921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:24:48.203659 containerd[1460]: time="2026-01-24T00:24:48.203135105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 6.402026008s" Jan 24 00:24:48.203659 containerd[1460]: time="2026-01-24T00:24:48.203268829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 00:24:54.441096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:24:54.452983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:54.472762 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:24:54.472940 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:24:54.473448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:54.478302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:24:54.513285 systemd[1]: Reloading requested from client PID 2109 ('systemctl') (unit session-9.scope)... Jan 24 00:24:54.513331 systemd[1]: Reloading... Jan 24 00:24:54.612834 zram_generator::config[2146]: No configuration found. Jan 24 00:24:54.763041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:24:54.864477 systemd[1]: Reloading finished in 350 ms. Jan 24 00:24:54.925272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:24:54.925410 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:24:54.925817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:54.936098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:24:55.273160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:24:55.291350 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:24:55.364139 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:24:55.364139 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:24:55.364139 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:24:55.364911 kubelet[2196]: I0124 00:24:55.364239 2196 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:24:56.701263 kubelet[2196]: I0124 00:24:56.695888 2196 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:24:56.701263 kubelet[2196]: I0124 00:24:56.701280 2196 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:24:56.705377 kubelet[2196]: I0124 00:24:56.703851 2196 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:24:56.829609 kubelet[2196]: E0124 00:24:56.829326 2196 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:24:56.829609 kubelet[2196]: I0124 00:24:56.829902 2196 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:24:56.867848 kubelet[2196]: E0124 00:24:56.867742 2196 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:24:56.867965 kubelet[2196]: I0124 00:24:56.867856 2196 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:24:56.880550 kubelet[2196]: I0124 00:24:56.880070 2196 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:24:56.883319 kubelet[2196]: I0124 00:24:56.883188 2196 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:24:56.883797 kubelet[2196]: I0124 00:24:56.883357 2196 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:24:56.884017 kubelet[2196]: I0124 00:24:56.883870 2196 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:24:56.884017 kubelet[2196]: I0124 00:24:56.883906 2196 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:24:56.884549 kubelet[2196]: I0124 00:24:56.884485 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:24:56.889620 kubelet[2196]: I0124 00:24:56.889405 2196 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:24:56.889784 kubelet[2196]: I0124 00:24:56.889655 2196 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:24:56.889992 kubelet[2196]: I0124 00:24:56.889893 2196 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:24:56.889992 kubelet[2196]: I0124 00:24:56.889956 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:24:56.896858 kubelet[2196]: E0124 00:24:56.896734 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:24:56.896858 kubelet[2196]: E0124 00:24:56.896734 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:24:56.900639 
kubelet[2196]: I0124 00:24:56.900539 2196 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:24:56.901799 kubelet[2196]: I0124 00:24:56.901711 2196 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:24:56.904505 kubelet[2196]: W0124 00:24:56.904408 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:24:56.911836 kubelet[2196]: I0124 00:24:56.911760 2196 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:24:56.911932 kubelet[2196]: I0124 00:24:56.911895 2196 server.go:1289] "Started kubelet" Jan 24 00:24:56.913070 kubelet[2196]: I0124 00:24:56.912244 2196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:24:56.917648 kubelet[2196]: I0124 00:24:56.917218 2196 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:24:56.917648 kubelet[2196]: I0124 00:24:56.917227 2196 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:24:56.917648 kubelet[2196]: I0124 00:24:56.917228 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:24:56.919207 kubelet[2196]: I0124 00:24:56.919091 2196 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:24:56.921669 kubelet[2196]: E0124 00:24:56.921542 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:24:56.921891 kubelet[2196]: I0124 00:24:56.921854 2196 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:24:56.923680 kubelet[2196]: I0124 00:24:56.923529 2196 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:24:56.923822 kubelet[2196]: I0124 00:24:56.923783 2196 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:24:56.924431 kubelet[2196]: E0124 00:24:56.924345 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:24:56.925635 kubelet[2196]: E0124 00:24:56.925508 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Jan 24 00:24:56.926373 kubelet[2196]: I0124 00:24:56.926307 2196 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:24:56.975060 kubelet[2196]: E0124 00:24:56.926762 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d830782f8c450 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:24:56.911823952 +0000 UTC m=+1.611347228,LastTimestamp:2026-01-24 00:24:56.911823952 +0000 UTC m=+1.611347228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:24:56.975060 kubelet[2196]: I0124 00:24:56.928449 2196 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:24:57.170373 kubelet[2196]: E0124 00:24:57.169800 2196 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:24:57.174351 kubelet[2196]: E0124 00:24:57.171844 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:24:57.174351 kubelet[2196]: E0124 00:24:57.172486 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Jan 24 00:24:57.179779 kubelet[2196]: W0124 00:24:57.179735 2196 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Jan 24 00:24:57.198467 kubelet[2196]: I0124 00:24:57.198376 2196 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:24:57.201640 kubelet[2196]: I0124 00:24:57.201474 2196 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:24:57.201979 kubelet[2196]: I0124 00:24:57.201890 2196 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:24:57.202093 kubelet[2196]: I0124 00:24:57.202047 2196 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
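The "RuntimeConfig from runtime service failed ... Unimplemented" entry and the gRPC dial error against /run/containerd/containerd.sock above can be reproduced directly against the CRI socket. The following is a minimal Go sketch, not part of the captured journal; it assumes the socket path shown in the log and the k8s.io/cri-api and google.golang.org/grpc modules:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the same containerd socket the kubelet uses in the log above.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	if v, err := client.Version(ctx, &runtimeapi.VersionRequest{}); err == nil {
		fmt.Println("runtime:", v.RuntimeName, v.RuntimeVersion)
	}

	// containerd 1.7.x predates this RPC, so a codes.Unimplemented error here
	// reproduces the "RuntimeConfig from runtime service failed" entry above;
	// the kubelet then falls back to the cgroupDriver from its own config.
	if _, err := client.RuntimeConfig(ctx, &runtimeapi.RuntimeConfigRequest{}); err != nil {
		fmt.Println("RuntimeConfig:", err)
	}
}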
Jan 24 00:24:57.202212 kubelet[2196]: I0124 00:24:57.202180 2196 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:24:57.202430 kubelet[2196]: E0124 00:24:57.202367 2196 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:24:57.205304 kubelet[2196]: E0124 00:24:57.205221 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:24:57.272908 kubelet[2196]: E0124 00:24:57.272668 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:24:57.282748 kubelet[2196]: I0124 00:24:57.282690 2196 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:24:57.282748 kubelet[2196]: I0124 00:24:57.282712 2196 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:24:57.302817 kubelet[2196]: E0124 00:24:57.302698 2196 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:24:57.304072 kubelet[2196]: I0124 00:24:57.303919 2196 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:24:57.304072 kubelet[2196]: I0124 00:24:57.303968 2196 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:24:57.304072 kubelet[2196]: I0124 00:24:57.304026 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:24:57.375388 kubelet[2196]: I0124 00:24:57.374861 2196 policy_none.go:49] "None policy: Start" Jan 24 00:24:57.375388 kubelet[2196]: E0124 00:24:57.374760 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:24:57.375388 kubelet[2196]: I0124 00:24:57.375361 2196 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:24:57.375388 kubelet[2196]: I0124 00:24:57.375542 2196 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:24:57.392318 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:24:57.409007 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:24:57.425882 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:24:57.428004 kubelet[2196]: E0124 00:24:57.427957 2196 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:24:57.428448 kubelet[2196]: I0124 00:24:57.428326 2196 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:24:57.428448 kubelet[2196]: I0124 00:24:57.428406 2196 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:24:57.429183 kubelet[2196]: I0124 00:24:57.428940 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:24:57.438860 kubelet[2196]: E0124 00:24:57.438686 2196 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:24:57.439074 kubelet[2196]: E0124 00:24:57.439047 2196 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:24:57.520867 systemd[1]: Created slice kubepods-burstable-podeaf9e2e0580393f127d8afc00cdbc1dc.slice - libcontainer container kubepods-burstable-podeaf9e2e0580393f127d8afc00cdbc1dc.slice. Jan 24 00:24:57.530903 kubelet[2196]: I0124 00:24:57.530711 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:24:57.531474 kubelet[2196]: E0124 00:24:57.531269 2196 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 24 00:24:57.533061 kubelet[2196]: E0124 00:24:57.532980 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:24:57.537461 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 24 00:24:57.540785 kubelet[2196]: E0124 00:24:57.540731 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:24:57.557794 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 24 00:24:57.560529 kubelet[2196]: E0124 00:24:57.560469 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:24:57.574441 kubelet[2196]: I0124 00:24:57.574379 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:24:57.574636 kubelet[2196]: I0124 00:24:57.574481 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:24:57.574636 kubelet[2196]: E0124 00:24:57.574397 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Jan 24 00:24:57.574714 kubelet[2196]: I0124 00:24:57.574649 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:24:57.574714 kubelet[2196]: I0124 00:24:57.574688 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:24:57.574806 kubelet[2196]: I0124 00:24:57.574714 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:24:57.574914 kubelet[2196]: I0124 00:24:57.574853 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:24:57.574960 kubelet[2196]: I0124 00:24:57.574925 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:24:57.574960 kubelet[2196]: I0124 00:24:57.574957 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:24:57.575020 kubelet[2196]: I0124 00:24:57.574974 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:24:57.741089 kubelet[2196]: I0124 00:24:57.740867 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:24:57.741089 kubelet[2196]: E0124 00:24:57.740880 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:24:57.741089 kubelet[2196]: E0124 00:24:57.741544 2196 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 24 00:24:57.819706 kubelet[2196]: E0124 00:24:57.819617 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:24:57.834624 kubelet[2196]: E0124 00:24:57.834500 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:57.836950 containerd[1460]: time="2026-01-24T00:24:57.836781177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eaf9e2e0580393f127d8afc00cdbc1dc,Namespace:kube-system,Attempt:0,}" Jan 24 00:24:57.841372 kubelet[2196]: E0124 00:24:57.841253 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:57.841980 containerd[1460]: time="2026-01-24T00:24:57.841933907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 24 00:24:57.861526 kubelet[2196]: E0124 00:24:57.861439 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:57.864470 containerd[1460]: time="2026-01-24T00:24:57.864373033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 24 00:24:58.146722 kubelet[2196]: I0124 00:24:58.146094 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:24:58.147262 kubelet[2196]: E0124 00:24:58.147094 2196 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 24 00:24:58.370778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230055938.mount: Deactivated successfully. 
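Every failed API call above ends in "dial tcp 10.0.0.16:6443: connect: connection refused", which simply means the kube-apiserver static pod is not serving yet. A reachability check in Go, offered as an illustration rather than anything the kubelet itself runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.0.0.16:6443 is the apiserver endpoint every failing call above targets.
	conn, err := net.DialTimeout("tcp", "10.0.0.16:6443", 2*time.Second)
	if err != nil {
		// Expected while the kube-apiserver static pod is still starting:
		// "dial tcp 10.0.0.16:6443: connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}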
Jan 24 00:24:58.375890 kubelet[2196]: E0124 00:24:58.375679 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Jan 24 00:24:58.401829 containerd[1460]: time="2026-01-24T00:24:58.401519905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:24:58.436989 kubelet[2196]: E0124 00:24:58.436703 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:24:58.439993 containerd[1460]: time="2026-01-24T00:24:58.439732128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:24:58.447279 containerd[1460]: time="2026-01-24T00:24:58.447071065Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:24:58.450381 containerd[1460]: time="2026-01-24T00:24:58.450227015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:24:58.453267 containerd[1460]: time="2026-01-24T00:24:58.453044750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:24:58.463598 containerd[1460]: time="2026-01-24T00:24:58.463361319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:24:58.470809 containerd[1460]: time="2026-01-24T00:24:58.469233812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:24:58.485655 containerd[1460]: time="2026-01-24T00:24:58.479220476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:24:58.485655 containerd[1460]: time="2026-01-24T00:24:58.480497237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 638.491377ms" Jan 24 00:24:58.487077 containerd[1460]: time="2026-01-24T00:24:58.486890514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.776324ms" Jan 24 00:24:58.511798 containerd[1460]: 
time="2026-01-24T00:24:58.511305070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.827394ms" Jan 24 00:24:58.662239 kubelet[2196]: E0124 00:24:58.659823 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d830782f8c450 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:24:56.911823952 +0000 UTC m=+1.611347228,LastTimestamp:2026-01-24 00:24:56.911823952 +0000 UTC m=+1.611347228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:24:58.778621 kubelet[2196]: E0124 00:24:58.778186 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:24:58.923692 containerd[1460]: time="2026-01-24T00:24:58.923392625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:24:58.924483 containerd[1460]: time="2026-01-24T00:24:58.923658745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:24:58.924483 containerd[1460]: time="2026-01-24T00:24:58.923692507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:58.924483 containerd[1460]: time="2026-01-24T00:24:58.923882688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:58.944795 kubelet[2196]: E0124 00:24:58.943802 2196 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:24:58.954739 kubelet[2196]: I0124 00:24:58.953402 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:24:58.954739 kubelet[2196]: E0124 00:24:58.954061 2196 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 24 00:24:58.955202 containerd[1460]: time="2026-01-24T00:24:58.953783845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:24:58.955202 containerd[1460]: time="2026-01-24T00:24:58.953912663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:24:58.955202 containerd[1460]: time="2026-01-24T00:24:58.953935105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:58.955202 containerd[1460]: time="2026-01-24T00:24:58.954067969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:58.958907 containerd[1460]: time="2026-01-24T00:24:58.958370604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:24:58.958907 containerd[1460]: time="2026-01-24T00:24:58.958502737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:24:58.958907 containerd[1460]: time="2026-01-24T00:24:58.958528394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:58.958907 containerd[1460]: time="2026-01-24T00:24:58.958753899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:24:59.033373 systemd[1]: Started cri-containerd-67d2e11f15e2ac99ba1ce615797ab632d33ff4b2dbc36ecbcea11a13e2289781.scope - libcontainer container 67d2e11f15e2ac99ba1ce615797ab632d33ff4b2dbc36ecbcea11a13e2289781. Jan 24 00:24:59.037745 systemd[1]: Started cri-containerd-f2774316df6dac2dea3fc9e28a13f4a3b6a828aaa2a650e9c7f6d17dccf64a12.scope - libcontainer container f2774316df6dac2dea3fc9e28a13f4a3b6a828aaa2a650e9c7f6d17dccf64a12. Jan 24 00:24:59.064306 systemd[1]: Started cri-containerd-8cbfb4757a31285143d9c5dfb32f53decd7b7064b14ab46b49fed90e1ff953de.scope - libcontainer container 8cbfb4757a31285143d9c5dfb32f53decd7b7064b14ab46b49fed90e1ff953de. 
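The recurring "Failed to ensure lease exists, will retry" entries above refer to the node's Lease object in the kube-node-lease namespace. Once the apiserver is up, the same object can be read with client-go; this is an illustrative sketch, and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// /etc/kubernetes/kubelet.conf is an assumption; any working kubeconfig does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same object the kubelet retries above: Lease "localhost" in kube-node-lease.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lease not available yet:", err)
		return
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)
}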
Jan 24 00:24:59.804638 containerd[1460]: time="2026-01-24T00:24:59.804440818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2774316df6dac2dea3fc9e28a13f4a3b6a828aaa2a650e9c7f6d17dccf64a12\"" Jan 24 00:24:59.808851 kubelet[2196]: E0124 00:24:59.808744 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:59.814281 containerd[1460]: time="2026-01-24T00:24:59.813464073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eaf9e2e0580393f127d8afc00cdbc1dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cbfb4757a31285143d9c5dfb32f53decd7b7064b14ab46b49fed90e1ff953de\"" Jan 24 00:24:59.815333 containerd[1460]: time="2026-01-24T00:24:59.814696615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"67d2e11f15e2ac99ba1ce615797ab632d33ff4b2dbc36ecbcea11a13e2289781\"" Jan 24 00:24:59.815392 kubelet[2196]: E0124 00:24:59.815013 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:59.818504 kubelet[2196]: E0124 00:24:59.818304 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:24:59.818656 containerd[1460]: time="2026-01-24T00:24:59.818341941Z" level=info msg="CreateContainer within sandbox \"f2774316df6dac2dea3fc9e28a13f4a3b6a828aaa2a650e9c7f6d17dccf64a12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:24:59.823355 containerd[1460]: time="2026-01-24T00:24:59.823229480Z" level=info msg="CreateContainer within sandbox \"8cbfb4757a31285143d9c5dfb32f53decd7b7064b14ab46b49fed90e1ff953de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:24:59.826358 containerd[1460]: time="2026-01-24T00:24:59.826276506Z" level=info msg="CreateContainer within sandbox \"67d2e11f15e2ac99ba1ce615797ab632d33ff4b2dbc36ecbcea11a13e2289781\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:24:59.843532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820967155.mount: Deactivated successfully. 
Jan 24 00:24:59.851169 containerd[1460]: time="2026-01-24T00:24:59.851030827Z" level=info msg="CreateContainer within sandbox \"f2774316df6dac2dea3fc9e28a13f4a3b6a828aaa2a650e9c7f6d17dccf64a12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3da633c06077da53973b0dfc8fb051cb247c41c6f0cab05e1cfb84e8cd57226d\"" Jan 24 00:24:59.858324 containerd[1460]: time="2026-01-24T00:24:59.852887670Z" level=info msg="StartContainer for \"3da633c06077da53973b0dfc8fb051cb247c41c6f0cab05e1cfb84e8cd57226d\"" Jan 24 00:24:59.873858 containerd[1460]: time="2026-01-24T00:24:59.873745328Z" level=info msg="CreateContainer within sandbox \"8cbfb4757a31285143d9c5dfb32f53decd7b7064b14ab46b49fed90e1ff953de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a72c74719ab5d81a7f42a7e1199af338ce62fe957ed42075b7aa0cd261bd446\"" Jan 24 00:24:59.874704 containerd[1460]: time="2026-01-24T00:24:59.874672471Z" level=info msg="StartContainer for \"9a72c74719ab5d81a7f42a7e1199af338ce62fe957ed42075b7aa0cd261bd446\"" Jan 24 00:24:59.877324 containerd[1460]: time="2026-01-24T00:24:59.877243126Z" level=info msg="CreateContainer within sandbox \"67d2e11f15e2ac99ba1ce615797ab632d33ff4b2dbc36ecbcea11a13e2289781\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7359e5b45a5df6082f32b3dc802fb419d2718e1a9849078a6e7ed3e88dc367f\"" Jan 24 00:24:59.877796 containerd[1460]: time="2026-01-24T00:24:59.877747484Z" level=info msg="StartContainer for \"b7359e5b45a5df6082f32b3dc802fb419d2718e1a9849078a6e7ed3e88dc367f\"" Jan 24 00:24:59.888838 kubelet[2196]: E0124 00:24:59.888771 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:24:59.936936 systemd[1]: Started cri-containerd-3da633c06077da53973b0dfc8fb051cb247c41c6f0cab05e1cfb84e8cd57226d.scope - libcontainer container 3da633c06077da53973b0dfc8fb051cb247c41c6f0cab05e1cfb84e8cd57226d. Jan 24 00:24:59.951017 systemd[1]: Started cri-containerd-9a72c74719ab5d81a7f42a7e1199af338ce62fe957ed42075b7aa0cd261bd446.scope - libcontainer container 9a72c74719ab5d81a7f42a7e1199af338ce62fe957ed42075b7aa0cd261bd446. Jan 24 00:24:59.967361 systemd[1]: Started cri-containerd-b7359e5b45a5df6082f32b3dc802fb419d2718e1a9849078a6e7ed3e88dc367f.scope - libcontainer container b7359e5b45a5df6082f32b3dc802fb419d2718e1a9849078a6e7ed3e88dc367f. 
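The sandbox and container IDs in the RunPodSandbox/CreateContainer/StartContainer entries above are ordinary CRI objects and can be listed back over the same socket. Another hedged sketch, reusing the dial pattern from earlier:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandboxes correspond to the RunPodSandbox return values above
	// ("f2774316...", "8cbfb475...", "67d2e11f...").
	sandboxes, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range sandboxes.Items {
		fmt.Printf("sandbox %.12s  %s/%s  %s\n",
			s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
	}

	// Containers correspond to the CreateContainer/StartContainer ids above.
	containers, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Printf("container %.12s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}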
Jan 24 00:24:59.977033 kubelet[2196]: E0124 00:24:59.976805 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="3.2s" Jan 24 00:25:00.158144 kubelet[2196]: E0124 00:25:00.133187 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:25:00.625494 containerd[1460]: time="2026-01-24T00:25:00.624644415Z" level=info msg="StartContainer for \"3da633c06077da53973b0dfc8fb051cb247c41c6f0cab05e1cfb84e8cd57226d\" returns successfully" Jan 24 00:25:00.628980 kubelet[2196]: I0124 00:25:00.628908 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:25:00.630262 kubelet[2196]: E0124 00:25:00.630154 2196 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 24 00:25:00.694674 containerd[1460]: time="2026-01-24T00:25:00.689485983Z" level=info msg="StartContainer for \"b7359e5b45a5df6082f32b3dc802fb419d2718e1a9849078a6e7ed3e88dc367f\" returns successfully" Jan 24 00:25:00.713510 containerd[1460]: time="2026-01-24T00:25:00.713397964Z" level=info msg="StartContainer for \"9a72c74719ab5d81a7f42a7e1199af338ce62fe957ed42075b7aa0cd261bd446\" returns successfully" Jan 24 00:25:00.719323 kubelet[2196]: E0124 00:25:00.719257 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:00.719489 kubelet[2196]: E0124 00:25:00.719442 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:00.802951 kubelet[2196]: E0124 00:25:00.802781 2196 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:25:00.804630 kubelet[2196]: E0124 00:25:00.803062 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:00.804630 kubelet[2196]: E0124 00:25:00.803872 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:01.754892 kubelet[2196]: E0124 00:25:01.754718 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:01.754892 kubelet[2196]: E0124 00:25:01.755149 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:01.757543 kubelet[2196]: E0124 00:25:01.755529 2196 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:01.757543 kubelet[2196]: E0124 00:25:01.756246 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:02.979423 kubelet[2196]: E0124 00:25:02.979174 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:02.979423 kubelet[2196]: E0124 00:25:02.979770 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:03.840750 kubelet[2196]: I0124 00:25:03.839999 2196 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:25:03.970763 kubelet[2196]: E0124 00:25:03.970688 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:03.971261 kubelet[2196]: E0124 00:25:03.971169 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:04.625301 kubelet[2196]: E0124 00:25:04.624969 2196 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:25:04.625301 kubelet[2196]: E0124 00:25:04.625489 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:04.990404 kubelet[2196]: E0124 00:25:04.989765 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:25:05.057677 kubelet[2196]: I0124 00:25:05.057517 2196 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:25:05.057677 kubelet[2196]: E0124 00:25:05.057622 2196 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:25:05.088095 kubelet[2196]: E0124 00:25:05.087790 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.188259 kubelet[2196]: E0124 00:25:05.188116 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.289507 kubelet[2196]: E0124 00:25:05.289197 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.390690 kubelet[2196]: E0124 00:25:05.390450 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.491321 kubelet[2196]: E0124 00:25:05.491192 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.592411 kubelet[2196]: E0124 00:25:05.592241 2196 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.693262 kubelet[2196]: E0124 00:25:05.693128 2196 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jan 24 00:25:05.825645 kubelet[2196]: I0124 00:25:05.825456 2196 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:25:05.835312 kubelet[2196]: E0124 00:25:05.835180 2196 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:25:05.835312 kubelet[2196]: I0124 00:25:05.835255 2196 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:25:05.838155 kubelet[2196]: E0124 00:25:05.837939 2196 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:25:05.838155 kubelet[2196]: I0124 00:25:05.837962 2196 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:25:05.840306 kubelet[2196]: E0124 00:25:05.840234 2196 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:25:05.918795 kubelet[2196]: I0124 00:25:05.916725 2196 apiserver.go:52] "Watching apiserver" Jan 24 00:25:05.924349 kubelet[2196]: I0124 00:25:05.924249 2196 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:25:07.759430 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-9.scope)... Jan 24 00:25:07.759492 systemd[1]: Reloading... Jan 24 00:25:08.690662 zram_generator::config[2528]: No configuration found. Jan 24 00:25:08.775153 kubelet[2196]: I0124 00:25:08.774819 2196 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:25:08.792152 kubelet[2196]: E0124 00:25:08.790256 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:08.918964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:25:09.421159 kubelet[2196]: E0124 00:25:09.420483 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:13.124933 kubelet[2196]: I0124 00:25:13.121420 2196 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:25:13.139657 kubelet[2196]: E0124 00:25:13.139164 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:13.155507 systemd[1]: Reloading finished in 5395 ms. 
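The "no PriorityClass with name system-node-critical was found" failures above are transient: the apiserver creates the built-in system-node-critical and system-cluster-critical classes during its own bootstrap, after which mirror-pod creation succeeds (the later "already exists" entries confirm this). A client-go check, with the kubeconfig path again an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// /etc/kubernetes/admin.conf is an assumption; any admin kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The mirror-pod errors above clear once the apiserver has created this
	// built-in PriorityClass during bootstrap.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.TODO(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err)
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}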
Jan 24 00:25:13.181618 kubelet[2196]: I0124 00:25:13.178887 2196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.178864123 podStartE2EDuration="5.178864123s" podCreationTimestamp="2026-01-24 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:13.178472026 +0000 UTC m=+17.877995382" watchObservedRunningTime="2026-01-24 00:25:13.178864123 +0000 UTC m=+17.878387400" Jan 24 00:25:13.295168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:25:13.326281 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:25:13.327134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:25:13.327222 systemd[1]: kubelet.service: Consumed 8.059s CPU time, 133.9M memory peak, 0B memory swap peak. Jan 24 00:25:13.365219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:25:14.104219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:25:14.148744 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:25:14.222103 kubelet[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:25:14.222103 kubelet[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:25:14.222103 kubelet[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:25:14.222103 kubelet[2571]: I0124 00:25:14.222051 2571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:25:14.234762 kubelet[2571]: I0124 00:25:14.234722 2571 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:25:14.234762 kubelet[2571]: I0124 00:25:14.234744 2571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:25:14.235177 kubelet[2571]: I0124 00:25:14.234967 2571 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:25:14.236797 kubelet[2571]: I0124 00:25:14.236674 2571 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:25:14.246024 kubelet[2571]: I0124 00:25:14.243850 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:25:14.255192 kubelet[2571]: E0124 00:25:14.254938 2571 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:25:14.255192 kubelet[2571]: I0124 00:25:14.255077 2571 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 24 00:25:14.264510 kubelet[2571]: I0124 00:25:14.264368 2571 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:25:14.264971 kubelet[2571]: I0124 00:25:14.264899 2571 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:25:14.265212 kubelet[2571]: I0124 00:25:14.264962 2571 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:25:14.265212 kubelet[2571]: I0124 00:25:14.265210 2571 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:25:14.265354 kubelet[2571]: I0124 00:25:14.265228 2571 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:25:14.265354 kubelet[2571]: I0124 00:25:14.265347 2571 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:25:14.265947 kubelet[2571]: I0124 00:25:14.265727 2571 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:25:14.265947 kubelet[2571]: I0124 00:25:14.265765 2571 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:25:14.265947 kubelet[2571]: I0124 00:25:14.265811 2571 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:25:14.265947 kubelet[2571]: I0124 00:25:14.265848 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:25:14.270007 kubelet[2571]: I0124 00:25:14.269827 2571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:25:14.270388 kubelet[2571]: I0124 00:25:14.270289 2571 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:25:14.275948 kubelet[2571]: I0124 00:25:14.275921 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:25:14.276052 kubelet[2571]: I0124 00:25:14.275974 2571 server.go:1289] "Started kubelet" Jan 24 00:25:14.278512 kubelet[2571]: 
I0124 00:25:14.277821 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:25:14.278512 kubelet[2571]: I0124 00:25:14.277920 2571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:25:14.278512 kubelet[2571]: I0124 00:25:14.278410 2571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:25:14.292145 kubelet[2571]: I0124 00:25:14.292061 2571 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:25:14.294331 kubelet[2571]: I0124 00:25:14.293815 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:25:14.294729 kubelet[2571]: E0124 00:25:14.294652 2571 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:25:14.295843 kubelet[2571]: I0124 00:25:14.295242 2571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:25:14.297647 kubelet[2571]: I0124 00:25:14.296773 2571 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:25:14.299206 kubelet[2571]: I0124 00:25:14.298519 2571 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:25:14.299206 kubelet[2571]: I0124 00:25:14.298822 2571 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:25:14.301936 kubelet[2571]: I0124 00:25:14.300898 2571 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:25:14.306813 kubelet[2571]: I0124 00:25:14.306716 2571 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:25:14.306813 kubelet[2571]: I0124 00:25:14.306786 2571 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:25:14.330156 kubelet[2571]: I0124 00:25:14.330111 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:25:14.336699 kubelet[2571]: I0124 00:25:14.336272 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:25:14.336699 kubelet[2571]: I0124 00:25:14.336292 2571 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:25:14.336699 kubelet[2571]: I0124 00:25:14.336311 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
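The restarted kubelet again serves the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock with the qps=100 rate limit noted above. That endpoint can be queried with the k8s.io/kubelet client stubs; a minimal sketch, which needs root (or equivalent) to reach the socket:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path taken from the "Starting to serve the podresources API" line.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.PodResources {
		fmt.Printf("%s/%s  containers=%d\n", pod.Namespace, pod.Name, len(pod.Containers))
	}
}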
Jan 24 00:25:14.336699 kubelet[2571]: I0124 00:25:14.336318 2571 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 24 00:25:14.336699 kubelet[2571]: E0124 00:25:14.336368 2571 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:25:14.390216 kubelet[2571]: I0124 00:25:14.389970 2571 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:25:14.390216 kubelet[2571]: I0124 00:25:14.390050 2571 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:25:14.390216 kubelet[2571]: I0124 00:25:14.390079 2571 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:25:14.390430 kubelet[2571]: I0124 00:25:14.390302 2571 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 00:25:14.390430 kubelet[2571]: I0124 00:25:14.390316 2571 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 00:25:14.390430 kubelet[2571]: I0124 00:25:14.390360 2571 policy_none.go:49] "None policy: Start"
Jan 24 00:25:14.390430 kubelet[2571]: I0124 00:25:14.390375 2571 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:25:14.390430 kubelet[2571]: I0124 00:25:14.390389 2571 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:25:14.390791 kubelet[2571]: I0124 00:25:14.390496 2571 state_mem.go:75] "Updated machine memory state"
Jan 24 00:25:14.400770 kubelet[2571]: E0124 00:25:14.399787 2571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 24 00:25:14.400770 kubelet[2571]: I0124 00:25:14.400281 2571 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:25:14.400770 kubelet[2571]: I0124 00:25:14.400298 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:25:14.401541 kubelet[2571]: I0124 00:25:14.401522 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:25:14.410668 kubelet[2571]: E0124 00:25:14.408389 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:25:14.438134 kubelet[2571]: I0124 00:25:14.438080 2571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:25:14.438360 kubelet[2571]: I0124 00:25:14.438284 2571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.439305 kubelet[2571]: I0124 00:25:14.439269 2571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:25:14.458123 kubelet[2571]: E0124 00:25:14.457922 2571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.458123 kubelet[2571]: E0124 00:25:14.457962 2571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:25:14.549272 kubelet[2571]: I0124 00:25:14.548753 2571 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 24 00:25:14.575641 kubelet[2571]: I0124 00:25:14.572902 2571 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 24 00:25:14.587080 kubelet[2571]: I0124 00:25:14.586331 2571 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 24 00:25:14.610362 kubelet[2571]: I0124 00:25:14.608242 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:25:14.613903 kubelet[2571]: I0124 00:25:14.613543 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:25:14.613903 kubelet[2571]: I0124 00:25:14.613670 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.613903 kubelet[2571]: I0124 00:25:14.613707 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.613903 kubelet[2571]: I0124 00:25:14.613732 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaf9e2e0580393f127d8afc00cdbc1dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf9e2e0580393f127d8afc00cdbc1dc\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:25:14.613903 kubelet[2571]: I0124 00:25:14.613762 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.614462 kubelet[2571]: I0124 00:25:14.613784 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.614462 kubelet[2571]: I0124 00:25:14.613809 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:25:14.614462 kubelet[2571]: I0124 00:25:14.613836 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Jan 24 00:25:14.752331 kubelet[2571]: E0124 00:25:14.751920 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:14.768134 kubelet[2571]: E0124 00:25:14.767870 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:14.768134 kubelet[2571]: E0124 00:25:14.767865 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:15.271707 kubelet[2571]: I0124 00:25:15.270663 2571 apiserver.go:52] "Watching apiserver"
Jan 24 00:25:15.358093 kubelet[2571]: E0124 00:25:15.357864 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:15.359113 kubelet[2571]: E0124 00:25:15.359084 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:15.360889 kubelet[2571]: E0124 00:25:15.360817 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:15.373652 kubelet[2571]: I0124 00:25:15.373254 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.373239298 podStartE2EDuration="1.373239298s" podCreationTimestamp="2026-01-24 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:15.373041541 +0000 UTC m=+1.213575181" watchObservedRunningTime="2026-01-24 00:25:15.373239298 +0000 UTC m=+1.213772940"
Jan 24 00:25:15.399344 kubelet[2571]: I0124 00:25:15.399286 2571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:25:16.233037 kubelet[2571]: I0124 00:25:16.232333 2571 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 00:25:16.235553 containerd[1460]: time="2026-01-24T00:25:16.235355208Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 00:25:16.236796 kubelet[2571]: I0124 00:25:16.235861 2571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 00:25:16.360370 kubelet[2571]: E0124 00:25:16.360116 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:16.360370 kubelet[2571]: E0124 00:25:16.360249 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:17.329475 systemd[1]: Created slice kubepods-besteffort-podbca40bc3_c261_4217_b90a_ce4c8eedad48.slice - libcontainer container kubepods-besteffort-podbca40bc3_c261_4217_b90a_ce4c8eedad48.slice.
Jan 24 00:25:17.384442 systemd[1]: Created slice kubepods-besteffort-podbaa20c75_97cc_4221_977a_d55319e085f0.slice - libcontainer container kubepods-besteffort-podbaa20c75_97cc_4221_977a_d55319e085f0.slice.
Jan 24 00:25:17.410260 kubelet[2571]: I0124 00:25:17.410193 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bca40bc3-c261-4217-b90a-ce4c8eedad48-kube-proxy\") pod \"kube-proxy-rpqsz\" (UID: \"bca40bc3-c261-4217-b90a-ce4c8eedad48\") " pod="kube-system/kube-proxy-rpqsz"
Jan 24 00:25:17.410260 kubelet[2571]: I0124 00:25:17.410227 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bca40bc3-c261-4217-b90a-ce4c8eedad48-xtables-lock\") pod \"kube-proxy-rpqsz\" (UID: \"bca40bc3-c261-4217-b90a-ce4c8eedad48\") " pod="kube-system/kube-proxy-rpqsz"
Jan 24 00:25:17.410260 kubelet[2571]: I0124 00:25:17.410244 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca40bc3-c261-4217-b90a-ce4c8eedad48-lib-modules\") pod \"kube-proxy-rpqsz\" (UID: \"bca40bc3-c261-4217-b90a-ce4c8eedad48\") " pod="kube-system/kube-proxy-rpqsz"
Jan 24 00:25:17.410260 kubelet[2571]: I0124 00:25:17.410260 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89s8\" (UniqueName: \"kubernetes.io/projected/bca40bc3-c261-4217-b90a-ce4c8eedad48-kube-api-access-t89s8\") pod \"kube-proxy-rpqsz\" (UID: \"bca40bc3-c261-4217-b90a-ce4c8eedad48\") " pod="kube-system/kube-proxy-rpqsz"
Jan 24 00:25:17.511235 kubelet[2571]: I0124 00:25:17.511078 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2c52\" (UniqueName: \"kubernetes.io/projected/baa20c75-97cc-4221-977a-d55319e085f0-kube-api-access-w2c52\") pod \"tigera-operator-7dcd859c48-jj2pp\" (UID: \"baa20c75-97cc-4221-977a-d55319e085f0\") " pod="tigera-operator/tigera-operator-7dcd859c48-jj2pp"
Jan 24 00:25:17.511235 kubelet[2571]: I0124 00:25:17.511145 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/baa20c75-97cc-4221-977a-d55319e085f0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jj2pp\" (UID: \"baa20c75-97cc-4221-977a-d55319e085f0\") " pod="tigera-operator/tigera-operator-7dcd859c48-jj2pp"
Jan 24 00:25:17.643875 kubelet[2571]: E0124 00:25:17.643471 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:17.653343 containerd[1460]: time="2026-01-24T00:25:17.651325847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpqsz,Uid:bca40bc3-c261-4217-b90a-ce4c8eedad48,Namespace:kube-system,Attempt:0,}"
Jan 24 00:25:17.694227 containerd[1460]: time="2026-01-24T00:25:17.694078060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jj2pp,Uid:baa20c75-97cc-4221-977a-d55319e085f0,Namespace:tigera-operator,Attempt:0,}"
Jan 24 00:25:17.766104 containerd[1460]: time="2026-01-24T00:25:17.765146954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:25:17.766104 containerd[1460]: time="2026-01-24T00:25:17.765468131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:25:17.766104 containerd[1460]: time="2026-01-24T00:25:17.765516101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:17.776189 containerd[1460]: time="2026-01-24T00:25:17.769707793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:17.779117 containerd[1460]: time="2026-01-24T00:25:17.777126304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:25:17.779117 containerd[1460]: time="2026-01-24T00:25:17.777382370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:25:17.779117 containerd[1460]: time="2026-01-24T00:25:17.777419388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:17.779117 containerd[1460]: time="2026-01-24T00:25:17.778115473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:17.818836 systemd[1]: Started cri-containerd-f522a7ebeafdf94ddb0a91fb8fd25ae0406920d46c44073e41844d1a1d7e4eb8.scope - libcontainer container f522a7ebeafdf94ddb0a91fb8fd25ae0406920d46c44073e41844d1a1d7e4eb8.
Jan 24 00:25:17.824400 systemd[1]: Started cri-containerd-be53936459eb3d6c28f1e36b3c46d48bbf299ef99d773560807861ad1a66753c.scope - libcontainer container be53936459eb3d6c28f1e36b3c46d48bbf299ef99d773560807861ad1a66753c.
Jan 24 00:25:17.893296 containerd[1460]: time="2026-01-24T00:25:17.893239838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpqsz,Uid:bca40bc3-c261-4217-b90a-ce4c8eedad48,Namespace:kube-system,Attempt:0,} returns sandbox id \"be53936459eb3d6c28f1e36b3c46d48bbf299ef99d773560807861ad1a66753c\""
Jan 24 00:25:17.896774 kubelet[2571]: E0124 00:25:17.896512 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:17.918548 containerd[1460]: time="2026-01-24T00:25:17.918327392Z" level=info msg="CreateContainer within sandbox \"be53936459eb3d6c28f1e36b3c46d48bbf299ef99d773560807861ad1a66753c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 24 00:25:17.918548 containerd[1460]: time="2026-01-24T00:25:17.918622163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jj2pp,Uid:baa20c75-97cc-4221-977a-d55319e085f0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f522a7ebeafdf94ddb0a91fb8fd25ae0406920d46c44073e41844d1a1d7e4eb8\""
Jan 24 00:25:17.939172 containerd[1460]: time="2026-01-24T00:25:17.939077123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 24 00:25:17.990291 containerd[1460]: time="2026-01-24T00:25:17.990148047Z" level=info msg="CreateContainer within sandbox \"be53936459eb3d6c28f1e36b3c46d48bbf299ef99d773560807861ad1a66753c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cf90745b50080ef436156a8d20bcfadf6c519edef6b5d3cc0e4b1b5737a521b\""
Jan 24 00:25:17.992522 containerd[1460]: time="2026-01-24T00:25:17.992444446Z" level=info msg="StartContainer for \"9cf90745b50080ef436156a8d20bcfadf6c519edef6b5d3cc0e4b1b5737a521b\""
Jan 24 00:25:18.096235 systemd[1]: Started cri-containerd-9cf90745b50080ef436156a8d20bcfadf6c519edef6b5d3cc0e4b1b5737a521b.scope - libcontainer container 9cf90745b50080ef436156a8d20bcfadf6c519edef6b5d3cc0e4b1b5737a521b.
Jan 24 00:25:18.164492 containerd[1460]: time="2026-01-24T00:25:18.164367022Z" level=info msg="StartContainer for \"9cf90745b50080ef436156a8d20bcfadf6c519edef6b5d3cc0e4b1b5737a521b\" returns successfully"
Jan 24 00:25:18.389289 kubelet[2571]: E0124 00:25:18.388752 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:18.453382 kubelet[2571]: I0124 00:25:18.447911 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rpqsz" podStartSLOduration=1.447865385 podStartE2EDuration="1.447865385s" podCreationTimestamp="2026-01-24 00:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:18.446427346 +0000 UTC m=+4.286961007" watchObservedRunningTime="2026-01-24 00:25:18.447865385 +0000 UTC m=+4.288399026"
Jan 24 00:25:18.882040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173513387.mount: Deactivated successfully.
Jan 24 00:25:19.563467 kubelet[2571]: E0124 00:25:19.562911 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:20.030642 containerd[1460]: time="2026-01-24T00:25:20.030393125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:20.031635 containerd[1460]: time="2026-01-24T00:25:20.031518434Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 24 00:25:20.033217 containerd[1460]: time="2026-01-24T00:25:20.033159822Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:20.039540 containerd[1460]: time="2026-01-24T00:25:20.039129048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:20.040615 containerd[1460]: time="2026-01-24T00:25:20.040481745Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.101335586s"
Jan 24 00:25:20.041252 containerd[1460]: time="2026-01-24T00:25:20.040552507Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 24 00:25:20.056239 containerd[1460]: time="2026-01-24T00:25:20.056119395Z" level=info msg="CreateContainer within sandbox \"f522a7ebeafdf94ddb0a91fb8fd25ae0406920d46c44073e41844d1a1d7e4eb8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 24 00:25:20.112271 containerd[1460]: time="2026-01-24T00:25:20.111890108Z" level=info msg="CreateContainer within sandbox \"f522a7ebeafdf94ddb0a91fb8fd25ae0406920d46c44073e41844d1a1d7e4eb8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f8aa03ee039763894249a5208d873ee21e00ceb91218f09487b566798deb1cd\""
Jan 24 00:25:20.114416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45853464.mount: Deactivated successfully.
Jan 24 00:25:20.119303 containerd[1460]: time="2026-01-24T00:25:20.118399385Z" level=info msg="StartContainer for \"2f8aa03ee039763894249a5208d873ee21e00ceb91218f09487b566798deb1cd\""
Jan 24 00:25:20.295403 systemd[1]: Started cri-containerd-2f8aa03ee039763894249a5208d873ee21e00ceb91218f09487b566798deb1cd.scope - libcontainer container 2f8aa03ee039763894249a5208d873ee21e00ceb91218f09487b566798deb1cd.
Jan 24 00:25:20.503715 containerd[1460]: time="2026-01-24T00:25:20.499949216Z" level=info msg="StartContainer for \"2f8aa03ee039763894249a5208d873ee21e00ceb91218f09487b566798deb1cd\" returns successfully"
Jan 24 00:25:20.504810 kubelet[2571]: E0124 00:25:20.501956 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:21.020359 kubelet[2571]: E0124 00:25:21.020137 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:21.508134 kubelet[2571]: E0124 00:25:21.508049 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:21.510810 kubelet[2571]: E0124 00:25:21.510661 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:21.514374 kubelet[2571]: E0124 00:25:21.514201 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:21.561869 kubelet[2571]: I0124 00:25:21.561431 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jj2pp" podStartSLOduration=2.4564261419999998 podStartE2EDuration="4.56140468s" podCreationTimestamp="2026-01-24 00:25:17 +0000 UTC" firstStartedPulling="2026-01-24 00:25:17.937972679 +0000 UTC m=+3.778506320" lastFinishedPulling="2026-01-24 00:25:20.042951216 +0000 UTC m=+5.883484858" observedRunningTime="2026-01-24 00:25:21.535386993 +0000 UTC m=+7.375920655" watchObservedRunningTime="2026-01-24 00:25:21.56140468 +0000 UTC m=+7.401938340"
Jan 24 00:25:22.511289 kubelet[2571]: E0124 00:25:22.511183 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:23.515658 kubelet[2571]: E0124 00:25:23.513996 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:26.818995 sudo[1655]: pam_unix(sudo:session): session closed for user root
Jan 24 00:25:26.827181 sshd[1652]: pam_unix(sshd:session): session closed for user core
Jan 24 00:25:26.837288 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:37194.service: Deactivated successfully.
Jan 24 00:25:26.845220 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 00:25:26.845726 systemd[1]: session-9.scope: Consumed 14.225s CPU time, 163.2M memory peak, 0B memory swap peak.
Jan 24 00:25:26.847954 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Jan 24 00:25:26.852846 systemd-logind[1443]: Removed session 9.
Jan 24 00:25:32.082120 systemd[1]: Created slice kubepods-besteffort-podbe2ad513_d99f_486d_8609_b2191b77d381.slice - libcontainer container kubepods-besteffort-podbe2ad513_d99f_486d_8609_b2191b77d381.slice.
Jan 24 00:25:32.094322 kubelet[2571]: I0124 00:25:32.093841 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2ad513-d99f-486d-8609-b2191b77d381-tigera-ca-bundle\") pod \"calico-typha-56cc598847-l7lr2\" (UID: \"be2ad513-d99f-486d-8609-b2191b77d381\") " pod="calico-system/calico-typha-56cc598847-l7lr2"
Jan 24 00:25:32.094322 kubelet[2571]: I0124 00:25:32.093884 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqm9\" (UniqueName: \"kubernetes.io/projected/be2ad513-d99f-486d-8609-b2191b77d381-kube-api-access-kkqm9\") pod \"calico-typha-56cc598847-l7lr2\" (UID: \"be2ad513-d99f-486d-8609-b2191b77d381\") " pod="calico-system/calico-typha-56cc598847-l7lr2"
Jan 24 00:25:32.094322 kubelet[2571]: I0124 00:25:32.093903 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/be2ad513-d99f-486d-8609-b2191b77d381-typha-certs\") pod \"calico-typha-56cc598847-l7lr2\" (UID: \"be2ad513-d99f-486d-8609-b2191b77d381\") " pod="calico-system/calico-typha-56cc598847-l7lr2"
Jan 24 00:25:32.354416 systemd[1]: Created slice kubepods-besteffort-podb57b3f69_7ed5_4117_9a4c_7111bfdfe23f.slice - libcontainer container kubepods-besteffort-podb57b3f69_7ed5_4117_9a4c_7111bfdfe23f.slice.
Jan 24 00:25:32.388482 kubelet[2571]: E0124 00:25:32.388301 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:32.389439 containerd[1460]: time="2026-01-24T00:25:32.389224013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56cc598847-l7lr2,Uid:be2ad513-d99f-486d-8609-b2191b77d381,Namespace:calico-system,Attempt:0,}"
Jan 24 00:25:32.436347 containerd[1460]: time="2026-01-24T00:25:32.433533491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:25:32.436347 containerd[1460]: time="2026-01-24T00:25:32.435704938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:25:32.436347 containerd[1460]: time="2026-01-24T00:25:32.435722822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:32.439166 containerd[1460]: time="2026-01-24T00:25:32.438758693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:32.481026 systemd[1]: Started cri-containerd-d3859274a9beac13eebcfb9b3730455e65708d0fda5260414d0ee2237ded3d17.scope - libcontainer container d3859274a9beac13eebcfb9b3730455e65708d0fda5260414d0ee2237ded3d17.
Jan 24 00:25:32.497090 kubelet[2571]: I0124 00:25:32.496436 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-var-lib-calico\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.500102 kubelet[2571]: I0124 00:25:32.499134 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrxc\" (UniqueName: \"kubernetes.io/projected/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-kube-api-access-hsrxc\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501106 kubelet[2571]: I0124 00:25:32.500267 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-lib-modules\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501106 kubelet[2571]: I0124 00:25:32.500320 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-var-run-calico\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501106 kubelet[2571]: I0124 00:25:32.500354 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-node-certs\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501106 kubelet[2571]: I0124 00:25:32.500381 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-policysync\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501106 kubelet[2571]: I0124 00:25:32.500414 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-cni-bin-dir\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501652 kubelet[2571]: I0124 00:25:32.500444 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-cni-net-dir\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501652 kubelet[2571]: I0124 00:25:32.500465 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-tigera-ca-bundle\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501652 kubelet[2571]: I0124 00:25:32.500494 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-cni-log-dir\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.501652 kubelet[2571]: I0124 00:25:32.500524 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-flexvol-driver-host\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.502098 kubelet[2571]: I0124 00:25:32.501889 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b57b3f69-7ed5-4117-9a4c-7111bfdfe23f-xtables-lock\") pod \"calico-node-d42vs\" (UID: \"b57b3f69-7ed5-4117-9a4c-7111bfdfe23f\") " pod="calico-system/calico-node-d42vs"
Jan 24 00:25:32.548247 kubelet[2571]: E0124 00:25:32.548139 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930"
Jan 24 00:25:32.588313 containerd[1460]: time="2026-01-24T00:25:32.588219330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56cc598847-l7lr2,Uid:be2ad513-d99f-486d-8609-b2191b77d381,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3859274a9beac13eebcfb9b3730455e65708d0fda5260414d0ee2237ded3d17\""
Jan 24 00:25:32.594761 kubelet[2571]: E0124 00:25:32.594684 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:32.597410 containerd[1460]: time="2026-01-24T00:25:32.597268013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:25:32.603151 kubelet[2571]: I0124 00:25:32.602978 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/677a3c6a-a428-4746-be4d-2080a36b4930-kubelet-dir\") pod \"csi-node-driver-grfd7\" (UID: \"677a3c6a-a428-4746-be4d-2080a36b4930\") " pod="calico-system/csi-node-driver-grfd7"
Jan 24 00:25:32.603292 kubelet[2571]: I0124 00:25:32.603155 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/677a3c6a-a428-4746-be4d-2080a36b4930-registration-dir\") pod \"csi-node-driver-grfd7\" (UID: \"677a3c6a-a428-4746-be4d-2080a36b4930\") " pod="calico-system/csi-node-driver-grfd7"
Jan 24 00:25:32.603292 kubelet[2571]: I0124 00:25:32.603184 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/677a3c6a-a428-4746-be4d-2080a36b4930-socket-dir\") pod \"csi-node-driver-grfd7\" (UID: \"677a3c6a-a428-4746-be4d-2080a36b4930\") " pod="calico-system/csi-node-driver-grfd7"
Jan 24 00:25:32.603436 kubelet[2571]: I0124 00:25:32.603303 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/677a3c6a-a428-4746-be4d-2080a36b4930-varrun\") pod \"csi-node-driver-grfd7\" (UID: \"677a3c6a-a428-4746-be4d-2080a36b4930\") " pod="calico-system/csi-node-driver-grfd7"
Jan 24 00:25:32.603436 kubelet[2571]: I0124 00:25:32.603337 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwblb\" (UniqueName: \"kubernetes.io/projected/677a3c6a-a428-4746-be4d-2080a36b4930-kube-api-access-vwblb\") pod \"csi-node-driver-grfd7\" (UID: \"677a3c6a-a428-4746-be4d-2080a36b4930\") " pod="calico-system/csi-node-driver-grfd7"
Jan 24 00:25:32.606441 kubelet[2571]: E0124 00:25:32.605755 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.606441 kubelet[2571]: W0124 00:25:32.605782 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.606441 kubelet[2571]: E0124 00:25:32.605845 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.607019 kubelet[2571]: E0124 00:25:32.606914 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.607019 kubelet[2571]: W0124 00:25:32.606933 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.607019 kubelet[2571]: E0124 00:25:32.606950 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.608153 kubelet[2571]: E0124 00:25:32.608027 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.608463 kubelet[2571]: W0124 00:25:32.608404 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.608463 kubelet[2571]: E0124 00:25:32.608454 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.609759 kubelet[2571]: E0124 00:25:32.609659 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.609834 kubelet[2571]: W0124 00:25:32.609820 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.609886 kubelet[2571]: E0124 00:25:32.609838 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.615446 kubelet[2571]: E0124 00:25:32.615414 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.616453 kubelet[2571]: W0124 00:25:32.616395 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.616544 kubelet[2571]: E0124 00:25:32.616460 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.617191 kubelet[2571]: E0124 00:25:32.617117 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.617191 kubelet[2571]: W0124 00:25:32.617169 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.617191 kubelet[2571]: E0124 00:25:32.617190 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.622171 kubelet[2571]: E0124 00:25:32.622125 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.622171 kubelet[2571]: W0124 00:25:32.622159 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.622375 kubelet[2571]: E0124 00:25:32.622184 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.659472 kubelet[2571]: E0124 00:25:32.659375 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:32.660808 containerd[1460]: time="2026-01-24T00:25:32.660121846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d42vs,Uid:b57b3f69-7ed5-4117-9a4c-7111bfdfe23f,Namespace:calico-system,Attempt:0,}"
Jan 24 00:25:32.704702 kubelet[2571]: E0124 00:25:32.704491 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.704702 kubelet[2571]: W0124 00:25:32.704640 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.704702 kubelet[2571]: E0124 00:25:32.704671 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.705521 kubelet[2571]: E0124 00:25:32.705371 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.705521 kubelet[2571]: W0124 00:25:32.705427 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.705521 kubelet[2571]: E0124 00:25:32.705450 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.706781 kubelet[2571]: E0124 00:25:32.706723 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.706781 kubelet[2571]: W0124 00:25:32.706780 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.707025 kubelet[2571]: E0124 00:25:32.706805 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.710392 kubelet[2571]: E0124 00:25:32.710318 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.710392 kubelet[2571]: W0124 00:25:32.710372 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.710392 kubelet[2571]: E0124 00:25:32.710394 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.715236 kubelet[2571]: E0124 00:25:32.714877 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.715236 kubelet[2571]: W0124 00:25:32.714946 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.715236 kubelet[2571]: E0124 00:25:32.714987 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.716755 kubelet[2571]: E0124 00:25:32.715908 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.716755 kubelet[2571]: W0124 00:25:32.715994 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.716755 kubelet[2571]: E0124 00:25:32.716014 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.716755 kubelet[2571]: E0124 00:25:32.716754 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.716926 kubelet[2571]: W0124 00:25:32.716771 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.716926 kubelet[2571]: E0124 00:25:32.716795 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.718418 kubelet[2571]: E0124 00:25:32.718371 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.718418 kubelet[2571]: W0124 00:25:32.718393 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.718418 kubelet[2571]: E0124 00:25:32.718411 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.720645 kubelet[2571]: E0124 00:25:32.719528 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.720645 kubelet[2571]: W0124 00:25:32.719550 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.720645 kubelet[2571]: E0124 00:25:32.719661 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.720973 kubelet[2571]: E0124 00:25:32.720789 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.720973 kubelet[2571]: W0124 00:25:32.720847 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.720973 kubelet[2571]: E0124 00:25:32.720865 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.722229 kubelet[2571]: E0124 00:25:32.721463 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.722229 kubelet[2571]: W0124 00:25:32.721482 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.722229 kubelet[2571]: E0124 00:25:32.721497 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.722681 kubelet[2571]: E0124 00:25:32.722251 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.722681 kubelet[2571]: W0124 00:25:32.722269 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.722681 kubelet[2571]: E0124 00:25:32.722282 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.723108 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.725334 kubelet[2571]: W0124 00:25:32.723128 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.723141 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.723881 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.725334 kubelet[2571]: W0124 00:25:32.723894 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.723907 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.724542 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.725334 kubelet[2571]: W0124 00:25:32.724648 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.724667 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.725334 kubelet[2571]: E0124 00:25:32.725344 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.726037 kubelet[2571]: W0124 00:25:32.725357 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.726037 kubelet[2571]: E0124 00:25:32.725369 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.726037 kubelet[2571]: E0124 00:25:32.725765 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.726037 kubelet[2571]: W0124 00:25:32.725781 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.726037 kubelet[2571]: E0124 00:25:32.725797 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.726688 kubelet[2571]: E0124 00:25:32.726524 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.726688 kubelet[2571]: W0124 00:25:32.726667 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.726688 kubelet[2571]: E0124 00:25:32.726686 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.727231 kubelet[2571]: E0124 00:25:32.727126 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.727231 kubelet[2571]: W0124 00:25:32.727178 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.727231 kubelet[2571]: E0124 00:25:32.727194 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.728924 kubelet[2571]: E0124 00:25:32.728809 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.728924 kubelet[2571]: W0124 00:25:32.728875 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.728924 kubelet[2571]: E0124 00:25:32.728892 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.730919 kubelet[2571]: E0124 00:25:32.730439 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.730919 kubelet[2571]: W0124 00:25:32.730459 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.730919 kubelet[2571]: E0124 00:25:32.730474 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.731501 kubelet[2571]: E0124 00:25:32.730996 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.731501 kubelet[2571]: W0124 00:25:32.731010 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.731501 kubelet[2571]: E0124 00:25:32.731025 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.732523 kubelet[2571]: E0124 00:25:32.732404 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.732523 kubelet[2571]: W0124 00:25:32.732450 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.732523 kubelet[2571]: E0124 00:25:32.732465 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.733046 kubelet[2571]: E0124 00:25:32.732962 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.733046 kubelet[2571]: W0124 00:25:32.733004 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.733046 kubelet[2571]: E0124 00:25:32.733019 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.733545 kubelet[2571]: E0124 00:25:32.733464 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.733545 kubelet[2571]: W0124 00:25:32.733502 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.733545 kubelet[2571]: E0124 00:25:32.733514 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.755781 kubelet[2571]: E0124 00:25:32.754924 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:25:32.755781 kubelet[2571]: W0124 00:25:32.754947 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:25:32.755781 kubelet[2571]: E0124 00:25:32.754969 2571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:25:32.757145 containerd[1460]: time="2026-01-24T00:25:32.756941300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:25:32.757270 containerd[1460]: time="2026-01-24T00:25:32.757158345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:25:32.757547 containerd[1460]: time="2026-01-24T00:25:32.757288306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:32.762343 containerd[1460]: time="2026-01-24T00:25:32.760796666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:25:32.794819 systemd[1]: Started cri-containerd-73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b.scope - libcontainer container 73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b.
Jan 24 00:25:32.847751 containerd[1460]: time="2026-01-24T00:25:32.847660805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d42vs,Uid:b57b3f69-7ed5-4117-9a4c-7111bfdfe23f,Namespace:calico-system,Attempt:0,} returns sandbox id \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\""
Jan 24 00:25:32.855172 kubelet[2571]: E0124 00:25:32.855047 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:34.338722 kubelet[2571]: E0124 00:25:34.338547 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930"
Jan 24 00:25:34.568036 containerd[1460]: time="2026-01-24T00:25:34.567845379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:34.573509 containerd[1460]: time="2026-01-24T00:25:34.573342597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:25:34.577056 containerd[1460]: time="2026-01-24T00:25:34.577018629Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:34.586386 containerd[1460]: time="2026-01-24T00:25:34.583327920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:34.590533 containerd[1460]: time="2026-01-24T00:25:34.590347304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.992991027s"
Jan 24 00:25:34.590533 containerd[1460]: time="2026-01-24T00:25:34.590435780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:25:34.593471 containerd[1460]: time="2026-01-24T00:25:34.593360459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:25:34.626815 containerd[1460]: time="2026-01-24T00:25:34.626663632Z" level=info msg="CreateContainer within sandbox \"d3859274a9beac13eebcfb9b3730455e65708d0fda5260414d0ee2237ded3d17\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:25:34.655802 containerd[1460]: time="2026-01-24T00:25:34.655215413Z" level=info msg="CreateContainer within sandbox \"d3859274a9beac13eebcfb9b3730455e65708d0fda5260414d0ee2237ded3d17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"61c23cecfdfa4a7b970b7967fab0b64da0952083879dad4f2ee1937593ce1753\""
Jan 24 00:25:34.657401 containerd[1460]: time="2026-01-24T00:25:34.657350573Z" level=info msg="StartContainer for \"61c23cecfdfa4a7b970b7967fab0b64da0952083879dad4f2ee1937593ce1753\""
Jan 24 00:25:34.714795 systemd[1]: Started cri-containerd-61c23cecfdfa4a7b970b7967fab0b64da0952083879dad4f2ee1937593ce1753.scope - libcontainer container 61c23cecfdfa4a7b970b7967fab0b64da0952083879dad4f2ee1937593ce1753.
Jan 24 00:25:34.797056 containerd[1460]: time="2026-01-24T00:25:34.796385704Z" level=info msg="StartContainer for \"61c23cecfdfa4a7b970b7967fab0b64da0952083879dad4f2ee1937593ce1753\" returns successfully"
Jan 24 00:25:35.303904 containerd[1460]: time="2026-01-24T00:25:35.303459215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:35.305900 containerd[1460]: time="2026-01-24T00:25:35.305768775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:25:35.308121 containerd[1460]: time="2026-01-24T00:25:35.308014851Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:35.311330 containerd[1460]: time="2026-01-24T00:25:35.311005624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:25:35.312395 containerd[1460]: time="2026-01-24T00:25:35.312280704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 718.753585ms"
Jan 24 00:25:35.312395 containerd[1460]: time="2026-01-24T00:25:35.312355264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:25:35.321387 containerd[1460]: time="2026-01-24T00:25:35.321198718Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:25:35.357639 containerd[1460]: time="2026-01-24T00:25:35.357418398Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078\""
Jan 24 00:25:35.358601 containerd[1460]: time="2026-01-24T00:25:35.358493105Z" level=info msg="StartContainer for \"6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078\""
Jan 24 00:25:35.421921 systemd[1]: Started cri-containerd-6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078.scope - libcontainer container 6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078.
Jan 24 00:25:35.509427 containerd[1460]: time="2026-01-24T00:25:35.508487325Z" level=info msg="StartContainer for \"6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078\" returns successfully"
Jan 24 00:25:35.535853 systemd[1]: cri-containerd-6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078.scope: Deactivated successfully.
Jan 24 00:25:35.583402 kubelet[2571]: E0124 00:25:35.582413 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:35.594411 kubelet[2571]: E0124 00:25:35.594313 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:35.624509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078-rootfs.mount: Deactivated successfully.
Jan 24 00:25:35.725005 containerd[1460]: time="2026-01-24T00:25:35.721251175Z" level=info msg="shim disconnected" id=6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078 namespace=k8s.io
Jan 24 00:25:35.725005 containerd[1460]: time="2026-01-24T00:25:35.724966843Z" level=warning msg="cleaning up after shim disconnected" id=6ea29bc0257cd36e2fa956055fa30153b2a560f9c23e822aa6b5a24ae1673078 namespace=k8s.io
Jan 24 00:25:35.725005 containerd[1460]: time="2026-01-24T00:25:35.724986270Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:25:35.776365 containerd[1460]: time="2026-01-24T00:25:35.776206589Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:25:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 00:25:36.356014 kubelet[2571]: E0124 00:25:36.355849 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930"
Jan 24 00:25:36.595768 kubelet[2571]: E0124 00:25:36.595529 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:25:36.596287 kubelet[2571]: I0124 00:25:36.596234 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:25:36.596983 containerd[1460]: time="2026-01-24T00:25:36.596427961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:25:36.597495 kubelet[2571]: E0124 00:25:36.597396
2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:36.616678 kubelet[2571]: I0124 00:25:36.615926 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56cc598847-l7lr2" podStartSLOduration=2.618906937 podStartE2EDuration="4.615902218s" podCreationTimestamp="2026-01-24 00:25:32 +0000 UTC" firstStartedPulling="2026-01-24 00:25:32.596183542 +0000 UTC m=+18.436717183" lastFinishedPulling="2026-01-24 00:25:34.593178823 +0000 UTC m=+20.433712464" observedRunningTime="2026-01-24 00:25:35.682447992 +0000 UTC m=+21.522981664" watchObservedRunningTime="2026-01-24 00:25:36.615902218 +0000 UTC m=+22.456435859" Jan 24 00:25:37.609654 kubelet[2571]: E0124 00:25:37.607857 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:38.339491 kubelet[2571]: E0124 00:25:38.339289 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:38.602248 kubelet[2571]: E0124 00:25:38.601727 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:40.200632 containerd[1460]: time="2026-01-24T00:25:40.200495937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:40.203135 containerd[1460]: time="2026-01-24T00:25:40.203053803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:25:40.208238 containerd[1460]: time="2026-01-24T00:25:40.206545375Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:40.212343 containerd[1460]: time="2026-01-24T00:25:40.212262144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:40.213992 containerd[1460]: time="2026-01-24T00:25:40.213771493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.617304739s" Jan 24 00:25:40.213992 containerd[1460]: time="2026-01-24T00:25:40.213819371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:25:40.223509 containerd[1460]: time="2026-01-24T00:25:40.223372315Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 
00:25:40.263236 containerd[1460]: time="2026-01-24T00:25:40.263051895Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898\"" Jan 24 00:25:40.264803 containerd[1460]: time="2026-01-24T00:25:40.264382671Z" level=info msg="StartContainer for \"62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898\"" Jan 24 00:25:40.334038 systemd[1]: Started cri-containerd-62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898.scope - libcontainer container 62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898. Jan 24 00:25:40.338959 kubelet[2571]: E0124 00:25:40.338771 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:40.400451 containerd[1460]: time="2026-01-24T00:25:40.400293704Z" level=info msg="StartContainer for \"62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898\" returns successfully" Jan 24 00:25:40.614820 kubelet[2571]: E0124 00:25:40.614722 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:41.613771 systemd[1]: cri-containerd-62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898.scope: Deactivated successfully. Jan 24 00:25:41.615537 systemd[1]: cri-containerd-62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898.scope: Consumed 1.604s CPU time. Jan 24 00:25:41.617187 kubelet[2571]: E0124 00:25:41.617033 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:41.667708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898-rootfs.mount: Deactivated successfully. 
Jan 24 00:25:41.674213 kubelet[2571]: I0124 00:25:41.672829 2571 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:25:41.804047 containerd[1460]: time="2026-01-24T00:25:41.802869481Z" level=info msg="shim disconnected" id=62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898 namespace=k8s.io Jan 24 00:25:41.804047 containerd[1460]: time="2026-01-24T00:25:41.802924704Z" level=warning msg="cleaning up after shim disconnected" id=62a8decc9e24426bccf62dd563fb38d00887858f472082a109c10f01349c3898 namespace=k8s.io Jan 24 00:25:41.804047 containerd[1460]: time="2026-01-24T00:25:41.802934051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:25:41.811182 kubelet[2571]: I0124 00:25:41.811051 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjnnp\" (UniqueName: \"kubernetes.io/projected/0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0-kube-api-access-gjnnp\") pod \"coredns-674b8bbfcf-2vn8x\" (UID: \"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0\") " pod="kube-system/coredns-674b8bbfcf-2vn8x" Jan 24 00:25:41.811392 kubelet[2571]: I0124 00:25:41.811191 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0-config-volume\") pod \"coredns-674b8bbfcf-2vn8x\" (UID: \"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0\") " pod="kube-system/coredns-674b8bbfcf-2vn8x" Jan 24 00:25:41.811392 kubelet[2571]: I0124 00:25:41.811229 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d228b\" (UniqueName: \"kubernetes.io/projected/516a3626-ef38-4d36-84e3-1a27e671269b-kube-api-access-d228b\") pod \"coredns-674b8bbfcf-xmh9g\" (UID: \"516a3626-ef38-4d36-84e3-1a27e671269b\") " pod="kube-system/coredns-674b8bbfcf-xmh9g" Jan 24 00:25:41.811392 kubelet[2571]: I0124 00:25:41.811265 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/516a3626-ef38-4d36-84e3-1a27e671269b-config-volume\") pod \"coredns-674b8bbfcf-xmh9g\" (UID: \"516a3626-ef38-4d36-84e3-1a27e671269b\") " pod="kube-system/coredns-674b8bbfcf-xmh9g" Jan 24 00:25:41.814741 systemd[1]: Created slice kubepods-burstable-pod516a3626_ef38_4d36_84e3_1a27e671269b.slice - libcontainer container kubepods-burstable-pod516a3626_ef38_4d36_84e3_1a27e671269b.slice. Jan 24 00:25:41.878238 systemd[1]: Created slice kubepods-burstable-pod0e5f8f70_b739_49dd_97ec_b14f3f8b9ba0.slice - libcontainer container kubepods-burstable-pod0e5f8f70_b739_49dd_97ec_b14f3f8b9ba0.slice. Jan 24 00:25:41.907128 systemd[1]: Created slice kubepods-besteffort-podc9124158_0f90_4bb6_8fd8_7f63bd272b78.slice - libcontainer container kubepods-besteffort-podc9124158_0f90_4bb6_8fd8_7f63bd272b78.slice. 
Jan 24 00:25:41.911685 kubelet[2571]: I0124 00:25:41.911466 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95qkk\" (UniqueName: \"kubernetes.io/projected/3c957499-b83a-4ee9-8faf-8cc8bcb63fe3-kube-api-access-95qkk\") pod \"goldmane-666569f655-jfbl5\" (UID: \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\") " pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:41.913721 kubelet[2571]: I0124 00:25:41.912811 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-524nm\" (UniqueName: \"kubernetes.io/projected/2b7e3139-1ac0-464d-91ba-3ef9871bf348-kube-api-access-524nm\") pod \"calico-kube-controllers-7f664d4f9c-5l5qb\" (UID: \"2b7e3139-1ac0-464d-91ba-3ef9871bf348\") " pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" Jan 24 00:25:41.913721 kubelet[2571]: I0124 00:25:41.912865 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9124158-0f90-4bb6-8fd8-7f63bd272b78-calico-apiserver-certs\") pod \"calico-apiserver-ff5668969-4kf4b\" (UID: \"c9124158-0f90-4bb6-8fd8-7f63bd272b78\") " pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" Jan 24 00:25:41.913721 kubelet[2571]: I0124 00:25:41.912891 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40ee8f0f-9c75-4f11-bb2e-9eb000639316-calico-apiserver-certs\") pod \"calico-apiserver-ff5668969-4dlrd\" (UID: \"40ee8f0f-9c75-4f11-bb2e-9eb000639316\") " pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" Jan 24 00:25:41.913721 kubelet[2571]: I0124 00:25:41.912940 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c957499-b83a-4ee9-8faf-8cc8bcb63fe3-goldmane-ca-bundle\") pod \"goldmane-666569f655-jfbl5\" (UID: \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\") " pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:41.913721 kubelet[2571]: I0124 00:25:41.912982 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c957499-b83a-4ee9-8faf-8cc8bcb63fe3-config\") pod \"goldmane-666569f655-jfbl5\" (UID: \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\") " pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:41.913984 kubelet[2571]: I0124 00:25:41.913007 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-backend-key-pair\") pod \"whisker-5dd9f46c89-fjr7r\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " pod="calico-system/whisker-5dd9f46c89-fjr7r" Jan 24 00:25:41.913984 kubelet[2571]: I0124 00:25:41.913051 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltmd\" (UniqueName: \"kubernetes.io/projected/9a65b350-997e-4465-9b6c-0f4736529b01-kube-api-access-2ltmd\") pod \"whisker-5dd9f46c89-fjr7r\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " pod="calico-system/whisker-5dd9f46c89-fjr7r" Jan 24 00:25:41.913984 kubelet[2571]: I0124 00:25:41.913139 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9p2\" 
(UniqueName: \"kubernetes.io/projected/c9124158-0f90-4bb6-8fd8-7f63bd272b78-kube-api-access-4t9p2\") pod \"calico-apiserver-ff5668969-4kf4b\" (UID: \"c9124158-0f90-4bb6-8fd8-7f63bd272b78\") " pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" Jan 24 00:25:41.913984 kubelet[2571]: I0124 00:25:41.913172 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jz9t\" (UniqueName: \"kubernetes.io/projected/40ee8f0f-9c75-4f11-bb2e-9eb000639316-kube-api-access-2jz9t\") pod \"calico-apiserver-ff5668969-4dlrd\" (UID: \"40ee8f0f-9c75-4f11-bb2e-9eb000639316\") " pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" Jan 24 00:25:41.913984 kubelet[2571]: I0124 00:25:41.913200 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3c957499-b83a-4ee9-8faf-8cc8bcb63fe3-goldmane-key-pair\") pod \"goldmane-666569f655-jfbl5\" (UID: \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\") " pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:41.914306 kubelet[2571]: I0124 00:25:41.913224 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-ca-bundle\") pod \"whisker-5dd9f46c89-fjr7r\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " pod="calico-system/whisker-5dd9f46c89-fjr7r" Jan 24 00:25:41.914306 kubelet[2571]: I0124 00:25:41.913247 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e3139-1ac0-464d-91ba-3ef9871bf348-tigera-ca-bundle\") pod \"calico-kube-controllers-7f664d4f9c-5l5qb\" (UID: \"2b7e3139-1ac0-464d-91ba-3ef9871bf348\") " pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" Jan 24 00:25:41.920699 systemd[1]: Created slice kubepods-besteffort-pod9a65b350_997e_4465_9b6c_0f4736529b01.slice - libcontainer container kubepods-besteffort-pod9a65b350_997e_4465_9b6c_0f4736529b01.slice. Jan 24 00:25:41.965463 systemd[1]: Created slice kubepods-besteffort-pod40ee8f0f_9c75_4f11_bb2e_9eb000639316.slice - libcontainer container kubepods-besteffort-pod40ee8f0f_9c75_4f11_bb2e_9eb000639316.slice. Jan 24 00:25:42.002112 systemd[1]: Created slice kubepods-besteffort-pod2b7e3139_1ac0_464d_91ba_3ef9871bf348.slice - libcontainer container kubepods-besteffort-pod2b7e3139_1ac0_464d_91ba_3ef9871bf348.slice. Jan 24 00:25:42.013942 systemd[1]: Created slice kubepods-besteffort-pod3c957499_b83a_4ee9_8faf_8cc8bcb63fe3.slice - libcontainer container kubepods-besteffort-pod3c957499_b83a_4ee9_8faf_8cc8bcb63fe3.slice. 
Jan 24 00:25:42.133750 kubelet[2571]: E0124 00:25:42.133310 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:42.135267 containerd[1460]: time="2026-01-24T00:25:42.134984139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmh9g,Uid:516a3626-ef38-4d36-84e3-1a27e671269b,Namespace:kube-system,Attempt:0,}" Jan 24 00:25:42.199201 kubelet[2571]: E0124 00:25:42.198673 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:42.200179 containerd[1460]: time="2026-01-24T00:25:42.199696699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vn8x,Uid:0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0,Namespace:kube-system,Attempt:0,}" Jan 24 00:25:42.214298 containerd[1460]: time="2026-01-24T00:25:42.214199492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4kf4b,Uid:c9124158-0f90-4bb6-8fd8-7f63bd272b78,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:25:42.239679 containerd[1460]: time="2026-01-24T00:25:42.238809726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9f46c89-fjr7r,Uid:9a65b350-997e-4465-9b6c-0f4736529b01,Namespace:calico-system,Attempt:0,}" Jan 24 00:25:42.296233 containerd[1460]: time="2026-01-24T00:25:42.296119760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4dlrd,Uid:40ee8f0f-9c75-4f11-bb2e-9eb000639316,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:25:42.313131 containerd[1460]: time="2026-01-24T00:25:42.311927392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f664d4f9c-5l5qb,Uid:2b7e3139-1ac0-464d-91ba-3ef9871bf348,Namespace:calico-system,Attempt:0,}" Jan 24 00:25:42.320181 containerd[1460]: time="2026-01-24T00:25:42.320141588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfbl5,Uid:3c957499-b83a-4ee9-8faf-8cc8bcb63fe3,Namespace:calico-system,Attempt:0,}" Jan 24 00:25:42.363200 systemd[1]: Created slice kubepods-besteffort-pod677a3c6a_a428_4746_be4d_2080a36b4930.slice - libcontainer container kubepods-besteffort-pod677a3c6a_a428_4746_be4d_2080a36b4930.slice. 
Jan 24 00:25:42.372959 containerd[1460]: time="2026-01-24T00:25:42.372788605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grfd7,Uid:677a3c6a-a428-4746-be4d-2080a36b4930,Namespace:calico-system,Attempt:0,}" Jan 24 00:25:42.595703 containerd[1460]: time="2026-01-24T00:25:42.595483799Z" level=error msg="Failed to destroy network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.606771 containerd[1460]: time="2026-01-24T00:25:42.606438251Z" level=error msg="encountered an error cleaning up failed sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.608879 containerd[1460]: time="2026-01-24T00:25:42.608742169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmh9g,Uid:516a3626-ef38-4d36-84e3-1a27e671269b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.609437 kubelet[2571]: E0124 00:25:42.609335 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.609765 kubelet[2571]: E0124 00:25:42.609491 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmh9g" Jan 24 00:25:42.609765 kubelet[2571]: E0124 00:25:42.609686 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmh9g" Jan 24 00:25:42.609869 kubelet[2571]: E0124 00:25:42.609777 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xmh9g_kube-system(516a3626-ef38-4d36-84e3-1a27e671269b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xmh9g_kube-system(516a3626-ef38-4d36-84e3-1a27e671269b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xmh9g" podUID="516a3626-ef38-4d36-84e3-1a27e671269b" Jan 24 00:25:42.625315 kubelet[2571]: E0124 00:25:42.625226 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:42.630967 containerd[1460]: time="2026-01-24T00:25:42.630742242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:25:42.644536 kubelet[2571]: I0124 00:25:42.643860 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:42.683022 containerd[1460]: time="2026-01-24T00:25:42.682911078Z" level=info msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" Jan 24 00:25:42.687889 containerd[1460]: time="2026-01-24T00:25:42.687777373Z" level=info msg="Ensure that sandbox 326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5 in task-service has been cleanup successfully" Jan 24 00:25:42.712379 containerd[1460]: time="2026-01-24T00:25:42.712239999Z" level=error msg="Failed to destroy network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.717897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225-shm.mount: Deactivated successfully. Jan 24 00:25:42.721698 containerd[1460]: time="2026-01-24T00:25:42.720144472Z" level=error msg="encountered an error cleaning up failed sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.728367 containerd[1460]: time="2026-01-24T00:25:42.728324293Z" level=error msg="Failed to destroy network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.733390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f-shm.mount: Deactivated successfully. 
Jan 24 00:25:42.819163 containerd[1460]: time="2026-01-24T00:25:42.729221096Z" level=error msg="encountered an error cleaning up failed sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.820026 containerd[1460]: time="2026-01-24T00:25:42.819977553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4kf4b,Uid:c9124158-0f90-4bb6-8fd8-7f63bd272b78,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.820340 containerd[1460]: time="2026-01-24T00:25:42.816274232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vn8x,Uid:0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.820867 kubelet[2571]: E0124 00:25:42.820819 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.821040 kubelet[2571]: E0124 00:25:42.821006 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" Jan 24 00:25:42.821213 kubelet[2571]: E0124 00:25:42.821186 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" Jan 24 00:25:42.821422 kubelet[2571]: E0124 00:25:42.821379 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:25:42.821697 kubelet[2571]: E0124 00:25:42.820929 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.821814 kubelet[2571]: E0124 00:25:42.821790 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vn8x" Jan 24 00:25:42.821903 kubelet[2571]: E0124 00:25:42.821879 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vn8x" Jan 24 00:25:42.822040 kubelet[2571]: E0124 00:25:42.822006 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2vn8x_kube-system(0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2vn8x_kube-system(0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2vn8x" podUID="0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0" Jan 24 00:25:42.838532 containerd[1460]: time="2026-01-24T00:25:42.838413044Z" level=error msg="Failed to destroy network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.844990 containerd[1460]: time="2026-01-24T00:25:42.841325076Z" level=error msg="encountered an error cleaning up failed sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.844990 containerd[1460]: time="2026-01-24T00:25:42.841406707Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-grfd7,Uid:677a3c6a-a428-4746-be4d-2080a36b4930,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.842835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2-shm.mount: Deactivated successfully. Jan 24 00:25:42.845397 kubelet[2571]: E0124 00:25:42.842181 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.845397 kubelet[2571]: E0124 00:25:42.842264 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-grfd7" Jan 24 00:25:42.845397 kubelet[2571]: E0124 00:25:42.842299 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-grfd7" Jan 24 00:25:42.845534 kubelet[2571]: E0124 00:25:42.842478 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:42.876912 containerd[1460]: time="2026-01-24T00:25:42.853874319Z" level=error msg="Failed to destroy network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.883249 containerd[1460]: time="2026-01-24T00:25:42.882369647Z" level=error msg="encountered an error cleaning up failed sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.883249 containerd[1460]: time="2026-01-24T00:25:42.882446830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9f46c89-fjr7r,Uid:9a65b350-997e-4465-9b6c-0f4736529b01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.885326 kubelet[2571]: E0124 00:25:42.882756 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.885326 kubelet[2571]: E0124 00:25:42.882814 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd9f46c89-fjr7r" Jan 24 00:25:42.885326 kubelet[2571]: E0124 00:25:42.882836 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd9f46c89-fjr7r" Jan 24 00:25:42.884049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516-shm.mount: Deactivated successfully. 
Jan 24 00:25:42.885664 kubelet[2571]: E0124 00:25:42.882882 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dd9f46c89-fjr7r_calico-system(9a65b350-997e-4465-9b6c-0f4736529b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dd9f46c89-fjr7r_calico-system(9a65b350-997e-4465-9b6c-0f4736529b01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd9f46c89-fjr7r" podUID="9a65b350-997e-4465-9b6c-0f4736529b01" Jan 24 00:25:42.895191 containerd[1460]: time="2026-01-24T00:25:42.895127569Z" level=error msg="Failed to destroy network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.899019 containerd[1460]: time="2026-01-24T00:25:42.898688766Z" level=error msg="encountered an error cleaning up failed sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.899019 containerd[1460]: time="2026-01-24T00:25:42.898780657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4dlrd,Uid:40ee8f0f-9c75-4f11-bb2e-9eb000639316,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.899473 kubelet[2571]: E0124 00:25:42.899371 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.899940 kubelet[2571]: E0124 00:25:42.899495 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" Jan 24 00:25:42.899940 kubelet[2571]: E0124 00:25:42.899533 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" Jan 24 00:25:42.899940 kubelet[2571]: E0124 00:25:42.899684 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:25:42.912176 containerd[1460]: time="2026-01-24T00:25:42.912009199Z" level=error msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" failed" error="failed to destroy network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.912618 kubelet[2571]: E0124 00:25:42.912465 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:42.914318 kubelet[2571]: E0124 00:25:42.913853 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5"} Jan 24 00:25:42.914543 kubelet[2571]: E0124 00:25:42.914370 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"516a3626-ef38-4d36-84e3-1a27e671269b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:42.914543 kubelet[2571]: E0124 00:25:42.914514 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"516a3626-ef38-4d36-84e3-1a27e671269b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xmh9g" podUID="516a3626-ef38-4d36-84e3-1a27e671269b" Jan 24 00:25:42.925368 containerd[1460]: time="2026-01-24T00:25:42.924834826Z" level=error msg="Failed to destroy network for sandbox 
\"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.926325 containerd[1460]: time="2026-01-24T00:25:42.926263423Z" level=error msg="encountered an error cleaning up failed sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.926510 containerd[1460]: time="2026-01-24T00:25:42.926354472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f664d4f9c-5l5qb,Uid:2b7e3139-1ac0-464d-91ba-3ef9871bf348,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.927097 kubelet[2571]: E0124 00:25:42.926840 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.927234 kubelet[2571]: E0124 00:25:42.927130 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" Jan 24 00:25:42.927234 kubelet[2571]: E0124 00:25:42.927166 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" Jan 24 00:25:42.927293 kubelet[2571]: E0124 00:25:42.927252 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" 
podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:25:42.930498 containerd[1460]: time="2026-01-24T00:25:42.930161277Z" level=error msg="Failed to destroy network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.931364 containerd[1460]: time="2026-01-24T00:25:42.931110555Z" level=error msg="encountered an error cleaning up failed sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.931364 containerd[1460]: time="2026-01-24T00:25:42.931238743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfbl5,Uid:3c957499-b83a-4ee9-8faf-8cc8bcb63fe3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.931806 kubelet[2571]: E0124 00:25:42.931714 2571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:42.931931 kubelet[2571]: E0124 00:25:42.931783 2571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:42.931931 kubelet[2571]: E0124 00:25:42.931925 2571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jfbl5" Jan 24 00:25:42.932038 kubelet[2571]: E0124 00:25:42.931980 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:25:43.678953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5-shm.mount: Deactivated successfully. Jan 24 00:25:43.679234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02-shm.mount: Deactivated successfully. Jan 24 00:25:43.679410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21-shm.mount: Deactivated successfully. Jan 24 00:25:43.685734 kubelet[2571]: I0124 00:25:43.685683 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:43.689132 containerd[1460]: time="2026-01-24T00:25:43.688360788Z" level=info msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" Jan 24 00:25:43.689132 containerd[1460]: time="2026-01-24T00:25:43.688691921Z" level=info msg="Ensure that sandbox 5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02 in task-service has been cleanup successfully" Jan 24 00:25:43.697421 kubelet[2571]: I0124 00:25:43.696454 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:43.700294 containerd[1460]: time="2026-01-24T00:25:43.699830090Z" level=info msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" Jan 24 00:25:43.700437 containerd[1460]: time="2026-01-24T00:25:43.700382493Z" level=info msg="Ensure that sandbox 825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f in task-service has been cleanup successfully" Jan 24 00:25:43.703398 kubelet[2571]: I0124 00:25:43.703295 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:43.705385 containerd[1460]: time="2026-01-24T00:25:43.705268416Z" level=info msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" Jan 24 00:25:43.705685 containerd[1460]: time="2026-01-24T00:25:43.705537585Z" level=info msg="Ensure that sandbox f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21 in task-service has been cleanup successfully" Jan 24 00:25:43.722647 kubelet[2571]: I0124 00:25:43.718544 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:43.726661 containerd[1460]: time="2026-01-24T00:25:43.723195582Z" level=info msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" Jan 24 00:25:43.726661 containerd[1460]: time="2026-01-24T00:25:43.723483515Z" level=info msg="Ensure that sandbox 9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516 in task-service has been cleanup successfully" Jan 24 00:25:43.737926 kubelet[2571]: I0124 00:25:43.736537 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:43.740895 containerd[1460]: time="2026-01-24T00:25:43.740299691Z" level=info msg="StopPodSandbox for 
\"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" Jan 24 00:25:43.746737 kubelet[2571]: I0124 00:25:43.746696 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:43.753341 containerd[1460]: time="2026-01-24T00:25:43.748472122Z" level=info msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" Jan 24 00:25:43.753816 containerd[1460]: time="2026-01-24T00:25:43.753734762Z" level=info msg="Ensure that sandbox 8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225 in task-service has been cleanup successfully" Jan 24 00:25:43.756325 containerd[1460]: time="2026-01-24T00:25:43.756224225Z" level=info msg="Ensure that sandbox e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2 in task-service has been cleanup successfully" Jan 24 00:25:43.770481 kubelet[2571]: I0124 00:25:43.770379 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:43.771441 containerd[1460]: time="2026-01-24T00:25:43.771339973Z" level=info msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" Jan 24 00:25:43.771944 containerd[1460]: time="2026-01-24T00:25:43.771835140Z" level=info msg="Ensure that sandbox e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5 in task-service has been cleanup successfully" Jan 24 00:25:43.911023 containerd[1460]: time="2026-01-24T00:25:43.910952426Z" level=error msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" failed" error="failed to destroy network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.912109 kubelet[2571]: E0124 00:25:43.912018 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:43.912811 kubelet[2571]: E0124 00:25:43.912683 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225"} Jan 24 00:25:43.913185 kubelet[2571]: E0124 00:25:43.913153 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.913546 kubelet[2571]: E0124 00:25:43.913511 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2vn8x" podUID="0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0" Jan 24 00:25:43.916943 containerd[1460]: time="2026-01-24T00:25:43.916838760Z" level=error msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" failed" error="failed to destroy network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.917289 kubelet[2571]: E0124 00:25:43.917165 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:43.917289 kubelet[2571]: E0124 00:25:43.917264 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02"} Jan 24 00:25:43.917488 kubelet[2571]: E0124 00:25:43.917307 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.917488 kubelet[2571]: E0124 00:25:43.917339 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:25:43.936943 containerd[1460]: time="2026-01-24T00:25:43.935095697Z" level=error msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" failed" error="failed to destroy network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.937126 kubelet[2571]: E0124 00:25:43.935347 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:43.937126 kubelet[2571]: E0124 00:25:43.935819 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516"} Jan 24 00:25:43.937126 kubelet[2571]: E0124 00:25:43.935877 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a65b350-997e-4465-9b6c-0f4736529b01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.937126 kubelet[2571]: E0124 00:25:43.935908 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a65b350-997e-4465-9b6c-0f4736529b01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd9f46c89-fjr7r" podUID="9a65b350-997e-4465-9b6c-0f4736529b01" Jan 24 00:25:43.941087 containerd[1460]: time="2026-01-24T00:25:43.940948856Z" level=error msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" failed" error="failed to destroy network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.941495 kubelet[2571]: E0124 00:25:43.941397 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:43.941679 kubelet[2571]: E0124 00:25:43.941505 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f"} Jan 24 00:25:43.941679 kubelet[2571]: E0124 00:25:43.941553 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9124158-0f90-4bb6-8fd8-7f63bd272b78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.941868 kubelet[2571]: E0124 00:25:43.941681 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9124158-0f90-4bb6-8fd8-7f63bd272b78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:25:43.946190 containerd[1460]: time="2026-01-24T00:25:43.945953549Z" level=error msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" failed" error="failed to destroy network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.947241 containerd[1460]: time="2026-01-24T00:25:43.946895222Z" level=error msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" failed" error="failed to destroy network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.947318 kubelet[2571]: E0124 00:25:43.946944 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:43.947318 kubelet[2571]: E0124 00:25:43.947002 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5"} Jan 24 00:25:43.947318 kubelet[2571]: E0124 00:25:43.947100 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b7e3139-1ac0-464d-91ba-3ef9871bf348\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.947318 kubelet[2571]: E0124 00:25:43.947159 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b7e3139-1ac0-464d-91ba-3ef9871bf348\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:25:43.947702 kubelet[2571]: E0124 00:25:43.947289 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:43.947702 kubelet[2571]: E0124 00:25:43.947323 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21"} Jan 24 00:25:43.947702 kubelet[2571]: E0124 00:25:43.947351 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40ee8f0f-9c75-4f11-bb2e-9eb000639316\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.947702 kubelet[2571]: E0124 00:25:43.947386 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40ee8f0f-9c75-4f11-bb2e-9eb000639316\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:25:43.957463 containerd[1460]: time="2026-01-24T00:25:43.957316074Z" level=error msg="StopPodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" failed" error="failed to destroy network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:25:43.958178 kubelet[2571]: E0124 00:25:43.958037 2571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:43.958345 kubelet[2571]: E0124 00:25:43.958175 2571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2"} Jan 24 00:25:43.958345 kubelet[2571]: E0124 00:25:43.958219 2571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"677a3c6a-a428-4746-be4d-2080a36b4930\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:25:43.958345 kubelet[2571]: E0124 00:25:43.958251 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"677a3c6a-a428-4746-be4d-2080a36b4930\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:49.398312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189587994.mount: Deactivated successfully. Jan 24 00:25:49.633640 containerd[1460]: time="2026-01-24T00:25:49.633459643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:49.634662 containerd[1460]: time="2026-01-24T00:25:49.634547094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:25:49.635968 containerd[1460]: time="2026-01-24T00:25:49.635867489Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:49.639393 containerd[1460]: time="2026-01-24T00:25:49.639275851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:25:49.640797 containerd[1460]: time="2026-01-24T00:25:49.640720333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.009883486s" Jan 24 00:25:49.640855 containerd[1460]: time="2026-01-24T00:25:49.640810200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:25:49.680252 containerd[1460]: time="2026-01-24T00:25:49.679646446Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:25:49.710428 containerd[1460]: time="2026-01-24T00:25:49.710088097Z" level=info msg="CreateContainer within sandbox \"73855e2c039f3e034afc1555cd4c4ef975883774ae630489f2946172b2561d1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e\"" Jan 24 00:25:49.711661 containerd[1460]: time="2026-01-24T00:25:49.711543297Z" level=info msg="StartContainer for \"ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e\"" Jan 24 00:25:49.792089 systemd[1]: Started 
cri-containerd-ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e.scope - libcontainer container ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e. Jan 24 00:25:49.853971 containerd[1460]: time="2026-01-24T00:25:49.853784200Z" level=info msg="StartContainer for \"ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e\" returns successfully" Jan 24 00:25:50.003826 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:25:50.006176 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 24 00:25:50.126669 containerd[1460]: time="2026-01-24T00:25:50.126489897Z" level=info msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.264 [INFO][3796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.264 [INFO][3796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" iface="eth0" netns="/var/run/netns/cni-3783a87a-8c5f-b371-1b2d-dcd1b603b069" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.266 [INFO][3796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" iface="eth0" netns="/var/run/netns/cni-3783a87a-8c5f-b371-1b2d-dcd1b603b069" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.269 [INFO][3796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" iface="eth0" netns="/var/run/netns/cni-3783a87a-8c5f-b371-1b2d-dcd1b603b069" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.269 [INFO][3796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.269 [INFO][3796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.402 [INFO][3812] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.409 [INFO][3812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.409 [INFO][3812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.420 [WARNING][3812] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.420 [INFO][3812] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.425 [INFO][3812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:50.433243 containerd[1460]: 2026-01-24 00:25:50.429 [INFO][3796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:25:50.433805 containerd[1460]: time="2026-01-24T00:25:50.433500640Z" level=info msg="TearDown network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" successfully" Jan 24 00:25:50.433805 containerd[1460]: time="2026-01-24T00:25:50.433533251Z" level=info msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" returns successfully" Jan 24 00:25:50.439704 systemd[1]: run-netns-cni\x2d3783a87a\x2d8c5f\x2db371\x2d1b2d\x2ddcd1b603b069.mount: Deactivated successfully. Jan 24 00:25:50.548282 kubelet[2571]: I0124 00:25:50.547994 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-backend-key-pair\") pod \"9a65b350-997e-4465-9b6c-0f4736529b01\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " Jan 24 00:25:50.548282 kubelet[2571]: I0124 00:25:50.548151 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ltmd\" (UniqueName: \"kubernetes.io/projected/9a65b350-997e-4465-9b6c-0f4736529b01-kube-api-access-2ltmd\") pod \"9a65b350-997e-4465-9b6c-0f4736529b01\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " Jan 24 00:25:50.548282 kubelet[2571]: I0124 00:25:50.548189 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-ca-bundle\") pod \"9a65b350-997e-4465-9b6c-0f4736529b01\" (UID: \"9a65b350-997e-4465-9b6c-0f4736529b01\") " Jan 24 00:25:50.548954 kubelet[2571]: I0124 00:25:50.548907 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a65b350-997e-4465-9b6c-0f4736529b01" (UID: "9a65b350-997e-4465-9b6c-0f4736529b01"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:25:50.556400 kubelet[2571]: I0124 00:25:50.556323 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a65b350-997e-4465-9b6c-0f4736529b01-kube-api-access-2ltmd" (OuterVolumeSpecName: "kube-api-access-2ltmd") pod "9a65b350-997e-4465-9b6c-0f4736529b01" (UID: "9a65b350-997e-4465-9b6c-0f4736529b01"). InnerVolumeSpecName "kube-api-access-2ltmd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:25:50.556519 kubelet[2571]: I0124 00:25:50.556492 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a65b350-997e-4465-9b6c-0f4736529b01" (UID: "9a65b350-997e-4465-9b6c-0f4736529b01"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:25:50.558096 systemd[1]: var-lib-kubelet-pods-9a65b350\x2d997e\x2d4465\x2d9b6c\x2d0f4736529b01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ltmd.mount: Deactivated successfully. Jan 24 00:25:50.558238 systemd[1]: var-lib-kubelet-pods-9a65b350\x2d997e\x2d4465\x2d9b6c\x2d0f4736529b01-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:25:50.649216 kubelet[2571]: I0124 00:25:50.649114 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ltmd\" (UniqueName: \"kubernetes.io/projected/9a65b350-997e-4465-9b6c-0f4736529b01-kube-api-access-2ltmd\") on node \"localhost\" DevicePath \"\"" Jan 24 00:25:50.649216 kubelet[2571]: I0124 00:25:50.649181 2571 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:25:50.649216 kubelet[2571]: I0124 00:25:50.649200 2571 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a65b350-997e-4465-9b6c-0f4736529b01-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:25:50.812659 kubelet[2571]: E0124 00:25:50.810982 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:50.821402 systemd[1]: Removed slice kubepods-besteffort-pod9a65b350_997e_4465_9b6c_0f4736529b01.slice - libcontainer container kubepods-besteffort-pod9a65b350_997e_4465_9b6c_0f4736529b01.slice. Jan 24 00:25:50.833296 kubelet[2571]: I0124 00:25:50.833103 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d42vs" podStartSLOduration=2.031326228 podStartE2EDuration="18.833080604s" podCreationTimestamp="2026-01-24 00:25:32 +0000 UTC" firstStartedPulling="2026-01-24 00:25:32.856324924 +0000 UTC m=+18.696858564" lastFinishedPulling="2026-01-24 00:25:49.658079299 +0000 UTC m=+35.498612940" observedRunningTime="2026-01-24 00:25:50.831789434 +0000 UTC m=+36.672323116" watchObservedRunningTime="2026-01-24 00:25:50.833080604 +0000 UTC m=+36.673614245" Jan 24 00:25:50.910123 systemd[1]: Created slice kubepods-besteffort-pod824df689_3a42_4e89_bcb7_c81811fd2fd8.slice - libcontainer container kubepods-besteffort-pod824df689_3a42_4e89_bcb7_c81811fd2fd8.slice. 
Jan 24 00:25:50.952358 kubelet[2571]: I0124 00:25:50.952243 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/824df689-3a42-4e89-bcb7-c81811fd2fd8-whisker-backend-key-pair\") pod \"whisker-69947d5585-lnx9f\" (UID: \"824df689-3a42-4e89-bcb7-c81811fd2fd8\") " pod="calico-system/whisker-69947d5585-lnx9f" Jan 24 00:25:50.952519 kubelet[2571]: I0124 00:25:50.952415 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/824df689-3a42-4e89-bcb7-c81811fd2fd8-whisker-ca-bundle\") pod \"whisker-69947d5585-lnx9f\" (UID: \"824df689-3a42-4e89-bcb7-c81811fd2fd8\") " pod="calico-system/whisker-69947d5585-lnx9f" Jan 24 00:25:50.952519 kubelet[2571]: I0124 00:25:50.952456 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9x7l\" (UniqueName: \"kubernetes.io/projected/824df689-3a42-4e89-bcb7-c81811fd2fd8-kube-api-access-z9x7l\") pod \"whisker-69947d5585-lnx9f\" (UID: \"824df689-3a42-4e89-bcb7-c81811fd2fd8\") " pod="calico-system/whisker-69947d5585-lnx9f" Jan 24 00:25:51.216657 containerd[1460]: time="2026-01-24T00:25:51.216437123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69947d5585-lnx9f,Uid:824df689-3a42-4e89-bcb7-c81811fd2fd8,Namespace:calico-system,Attempt:0,}" Jan 24 00:25:51.488811 systemd-networkd[1371]: cali17619c8c6c0: Link UP Jan 24 00:25:51.489248 systemd-networkd[1371]: cali17619c8c6c0: Gained carrier Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.281 [INFO][3835] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.312 [INFO][3835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69947d5585--lnx9f-eth0 whisker-69947d5585- calico-system 824df689-3a42-4e89-bcb7-c81811fd2fd8 978 0 2026-01-24 00:25:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69947d5585 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69947d5585-lnx9f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali17619c8c6c0 [] [] }} ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.312 [INFO][3835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.369 [INFO][3849] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" HandleID="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Workload="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.369 [INFO][3849] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" HandleID="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Workload="localhost-k8s-whisker--69947d5585--lnx9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69947d5585-lnx9f", "timestamp":"2026-01-24 00:25:51.369356733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.369 [INFO][3849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.370 [INFO][3849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.370 [INFO][3849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.380 [INFO][3849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.395 [INFO][3849] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.401 [INFO][3849] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.405 [INFO][3849] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.411 [INFO][3849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.412 [INFO][3849] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.417 [INFO][3849] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52 Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.428 [INFO][3849] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.441 [INFO][3849] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.441 [INFO][3849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" host="localhost" Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.441 [INFO][3849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:51.532192 containerd[1460]: 2026-01-24 00:25:51.441 [INFO][3849] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" HandleID="k8s-pod-network.7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Workload="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.458 [INFO][3835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69947d5585--lnx9f-eth0", GenerateName:"whisker-69947d5585-", Namespace:"calico-system", SelfLink:"", UID:"824df689-3a42-4e89-bcb7-c81811fd2fd8", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69947d5585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69947d5585-lnx9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali17619c8c6c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.458 [INFO][3835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.458 [INFO][3835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17619c8c6c0 ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.479 [INFO][3835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.483 [INFO][3835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69947d5585--lnx9f-eth0", GenerateName:"whisker-69947d5585-", Namespace:"calico-system", SelfLink:"", UID:"824df689-3a42-4e89-bcb7-c81811fd2fd8", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69947d5585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52", Pod:"whisker-69947d5585-lnx9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali17619c8c6c0", MAC:"9a:a1:0f:69:9f:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:51.532918 containerd[1460]: 2026-01-24 00:25:51.523 [INFO][3835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52" Namespace="calico-system" Pod="whisker-69947d5585-lnx9f" WorkloadEndpoint="localhost-k8s-whisker--69947d5585--lnx9f-eth0" Jan 24 00:25:51.592479 containerd[1460]: time="2026-01-24T00:25:51.590982537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:51.592479 containerd[1460]: time="2026-01-24T00:25:51.591162520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:51.592479 containerd[1460]: time="2026-01-24T00:25:51.591174672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:51.592479 containerd[1460]: time="2026-01-24T00:25:51.591312248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:51.666897 systemd[1]: Started cri-containerd-7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52.scope - libcontainer container 7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52. 
Jan 24 00:25:51.718777 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:51.824065 containerd[1460]: time="2026-01-24T00:25:51.823771198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69947d5585-lnx9f,Uid:824df689-3a42-4e89-bcb7-c81811fd2fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e1489bb825a22e0e1d4b804ec45960204f265e7370335a2d9811cc8eb6fce52\"" Jan 24 00:25:51.826818 containerd[1460]: time="2026-01-24T00:25:51.826438350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:25:51.961659 kernel: bpftool[4037]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:25:51.993674 containerd[1460]: time="2026-01-24T00:25:51.993482819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:52.011347 containerd[1460]: time="2026-01-24T00:25:51.995745564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:25:52.011347 containerd[1460]: time="2026-01-24T00:25:51.996085615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:25:52.011763 kubelet[2571]: E0124 00:25:52.011552 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:25:52.012379 kubelet[2571]: E0124 00:25:52.011761 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:25:52.012431 kubelet[2571]: E0124 00:25:52.011951 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7b482aad45a047b28315ef7e942c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:52.015952 containerd[1460]: time="2026-01-24T00:25:52.015528435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:25:52.094768 containerd[1460]: time="2026-01-24T00:25:52.094499420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:52.100460 containerd[1460]: time="2026-01-24T00:25:52.100310766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:25:52.100624 containerd[1460]: time="2026-01-24T00:25:52.100333795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:25:52.101077 kubelet[2571]: E0124 00:25:52.100965 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:25:52.101237 kubelet[2571]: E0124 00:25:52.101100 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:25:52.101486 kubelet[2571]: E0124 00:25:52.101312 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:52.102757 kubelet[2571]: E0124 00:25:52.102635 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:25:52.301924 systemd-networkd[1371]: vxlan.calico: Link UP Jan 24 00:25:52.301935 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 24 00:25:52.345728 kubelet[2571]: I0124 00:25:52.345161 2571 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="9a65b350-997e-4465-9b6c-0f4736529b01" path="/var/lib/kubelet/pods/9a65b350-997e-4465-9b6c-0f4736529b01/volumes" Jan 24 00:25:52.822548 kubelet[2571]: E0124 00:25:52.822381 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:25:53.052886 systemd-networkd[1371]: cali17619c8c6c0: Gained IPv6LL Jan 24 00:25:53.629006 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 24 00:25:53.824412 kubelet[2571]: E0124 00:25:53.824131 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:25:55.338695 containerd[1460]: time="2026-01-24T00:25:55.338530899Z" level=info msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" Jan 24 00:25:55.339883 containerd[1460]: time="2026-01-24T00:25:55.339335864Z" level=info msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" Jan 24 00:25:55.339883 containerd[1460]: time="2026-01-24T00:25:55.339742995Z" level=info msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" Jan 24 00:25:55.341153 containerd[1460]: time="2026-01-24T00:25:55.340252836Z" level=info msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" Jan 24 00:25:55.341702 containerd[1460]: time="2026-01-24T00:25:55.341209292Z" level=info msg="StopPodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.542 [INFO][4168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:55.661787 containerd[1460]: 
2026-01-24 00:25:55.542 [INFO][4168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" iface="eth0" netns="/var/run/netns/cni-3273f16e-ce87-0a21-3799-49fdd346873d" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.543 [INFO][4168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" iface="eth0" netns="/var/run/netns/cni-3273f16e-ce87-0a21-3799-49fdd346873d" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.545 [INFO][4168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" iface="eth0" netns="/var/run/netns/cni-3273f16e-ce87-0a21-3799-49fdd346873d" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.548 [INFO][4168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.548 [INFO][4168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.623 [INFO][4222] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.623 [INFO][4222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.624 [INFO][4222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.633 [WARNING][4222] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.634 [INFO][4222] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.637 [INFO][4222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:55.661787 containerd[1460]: 2026-01-24 00:25:55.650 [INFO][4168] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:25:55.661787 containerd[1460]: time="2026-01-24T00:25:55.657535470Z" level=info msg="TearDown network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" successfully" Jan 24 00:25:55.661787 containerd[1460]: time="2026-01-24T00:25:55.657724460Z" level=info msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" returns successfully" Jan 24 00:25:55.660485 systemd[1]: run-netns-cni\x2d3273f16e\x2dce87\x2d0a21\x2d3799\x2d49fdd346873d.mount: Deactivated successfully. Jan 24 00:25:55.664308 containerd[1460]: time="2026-01-24T00:25:55.664243393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f664d4f9c-5l5qb,Uid:2b7e3139-1ac0-464d-91ba-3ef9871bf348,Namespace:calico-system,Attempt:1,}" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.541 [INFO][4170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.541 [INFO][4170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" iface="eth0" netns="/var/run/netns/cni-34fb62bc-f105-0784-796e-ffbb5c608093" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.542 [INFO][4170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" iface="eth0" netns="/var/run/netns/cni-34fb62bc-f105-0784-796e-ffbb5c608093" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.545 [INFO][4170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" iface="eth0" netns="/var/run/netns/cni-34fb62bc-f105-0784-796e-ffbb5c608093" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.545 [INFO][4170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.545 [INFO][4170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.628 [INFO][4217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.629 [INFO][4217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.638 [INFO][4217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.650 [WARNING][4217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.650 [INFO][4217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.654 [INFO][4217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:55.672072 containerd[1460]: 2026-01-24 00:25:55.666 [INFO][4170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:25:55.675254 systemd[1]: run-netns-cni\x2d34fb62bc\x2df105\x2d0784\x2d796e\x2dffbb5c608093.mount: Deactivated successfully. Jan 24 00:25:55.676296 containerd[1460]: time="2026-01-24T00:25:55.676218580Z" level=info msg="TearDown network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" successfully" Jan 24 00:25:55.676296 containerd[1460]: time="2026-01-24T00:25:55.676251081Z" level=info msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" returns successfully" Jan 24 00:25:55.677831 kubelet[2571]: E0124 00:25:55.677761 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:55.680717 containerd[1460]: time="2026-01-24T00:25:55.680472544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmh9g,Uid:516a3626-ef38-4d36-84e3-1a27e671269b,Namespace:kube-system,Attempt:1,}" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.528 [INFO][4178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.532 [INFO][4178] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" iface="eth0" netns="/var/run/netns/cni-d82bb27b-fe9d-3724-23d1-cae164d47110" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.532 [INFO][4178] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" iface="eth0" netns="/var/run/netns/cni-d82bb27b-fe9d-3724-23d1-cae164d47110" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.533 [INFO][4178] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" iface="eth0" netns="/var/run/netns/cni-d82bb27b-fe9d-3724-23d1-cae164d47110" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.533 [INFO][4178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.533 [INFO][4178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.630 [INFO][4215] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.632 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.654 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.673 [WARNING][4215] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.674 [INFO][4215] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.680 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:55.693507 containerd[1460]: 2026-01-24 00:25:55.688 [INFO][4178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:25:55.698420 containerd[1460]: time="2026-01-24T00:25:55.696799653Z" level=info msg="TearDown network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" successfully" Jan 24 00:25:55.698420 containerd[1460]: time="2026-01-24T00:25:55.696844476Z" level=info msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" returns successfully" Jan 24 00:25:55.698723 kubelet[2571]: E0124 00:25:55.698337 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:55.700403 systemd[1]: run-netns-cni\x2dd82bb27b\x2dfe9d\x2d3724\x2d23d1\x2dcae164d47110.mount: Deactivated successfully. 
Jan 24 00:25:55.701079 containerd[1460]: time="2026-01-24T00:25:55.700917608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vn8x,Uid:0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0,Namespace:kube-system,Attempt:1,}" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.562 [INFO][4189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.567 [INFO][4189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" iface="eth0" netns="/var/run/netns/cni-216ef62f-7b68-c25a-fd7e-858f071e3955" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.568 [INFO][4189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" iface="eth0" netns="/var/run/netns/cni-216ef62f-7b68-c25a-fd7e-858f071e3955" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.569 [INFO][4189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" iface="eth0" netns="/var/run/netns/cni-216ef62f-7b68-c25a-fd7e-858f071e3955" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.569 [INFO][4189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.569 [INFO][4189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.642 [INFO][4232] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.642 [INFO][4232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.680 [INFO][4232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.694 [WARNING][4232] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.694 [INFO][4232] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.698 [INFO][4232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:55.770529 containerd[1460]: 2026-01-24 00:25:55.749 [INFO][4189] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:25:55.772869 containerd[1460]: time="2026-01-24T00:25:55.772524396Z" level=info msg="TearDown network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" successfully" Jan 24 00:25:55.772869 containerd[1460]: time="2026-01-24T00:25:55.772677230Z" level=info msg="StopPodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" returns successfully" Jan 24 00:25:55.780656 containerd[1460]: time="2026-01-24T00:25:55.779251435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grfd7,Uid:677a3c6a-a428-4746-be4d-2080a36b4930,Namespace:calico-system,Attempt:1,}" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.497 [INFO][4148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.500 [INFO][4148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" iface="eth0" netns="/var/run/netns/cni-2fff943b-bc70-4677-c358-c7aa76c8fadb" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.500 [INFO][4148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" iface="eth0" netns="/var/run/netns/cni-2fff943b-bc70-4677-c358-c7aa76c8fadb" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.502 [INFO][4148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" iface="eth0" netns="/var/run/netns/cni-2fff943b-bc70-4677-c358-c7aa76c8fadb" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.502 [INFO][4148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.502 [INFO][4148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.647 [INFO][4207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.648 [INFO][4207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.702 [INFO][4207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.767 [WARNING][4207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.767 [INFO][4207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.776 [INFO][4207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:55.788266 containerd[1460]: 2026-01-24 00:25:55.784 [INFO][4148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:25:55.790399 containerd[1460]: time="2026-01-24T00:25:55.790254594Z" level=info msg="TearDown network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" successfully" Jan 24 00:25:55.790668 containerd[1460]: time="2026-01-24T00:25:55.790644828Z" level=info msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" returns successfully" Jan 24 00:25:55.794073 containerd[1460]: time="2026-01-24T00:25:55.793987394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfbl5,Uid:3c957499-b83a-4ee9-8faf-8cc8bcb63fe3,Namespace:calico-system,Attempt:1,}" Jan 24 00:25:56.140766 systemd-networkd[1371]: cali9ae0c23f822: Link UP Jan 24 00:25:56.142000 systemd-networkd[1371]: cali9ae0c23f822: Gained carrier Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.856 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0 coredns-674b8bbfcf- kube-system 516a3626-ef38-4d36-84e3-1a27e671269b 1019 0 2026-01-24 00:25:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xmh9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ae0c23f822 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.857 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.933 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" HandleID="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.933 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" HandleID="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482a00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xmh9g", "timestamp":"2026-01-24 00:25:55.933074061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.934 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.934 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.934 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.949 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:55.977 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.041 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.061 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.081 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.081 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.087 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.102 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.121 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.121 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" host="localhost" Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.122 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:56.193847 containerd[1460]: 2026-01-24 00:25:56.122 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" HandleID="k8s-pod-network.a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.195522 containerd[1460]: 2026-01-24 00:25:56.131 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"516a3626-ef38-4d36-84e3-1a27e671269b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xmh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae0c23f822", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.195522 containerd[1460]: 2026-01-24 00:25:56.131 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.195522 containerd[1460]: 2026-01-24 00:25:56.131 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ae0c23f822 ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.195522 containerd[1460]: 2026-01-24 00:25:56.159 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.195522 
containerd[1460]: 2026-01-24 00:25:56.160 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"516a3626-ef38-4d36-84e3-1a27e671269b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d", Pod:"coredns-674b8bbfcf-xmh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae0c23f822", MAC:"ee:7f:58:14:ea:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.195522 containerd[1460]: 2026-01-24 00:25:56.186 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmh9g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:25:56.242451 systemd-networkd[1371]: califfd19156448: Link UP Jan 24 00:25:56.264271 systemd-networkd[1371]: califfd19156448: Gained carrier Jan 24 00:25:56.290329 containerd[1460]: time="2026-01-24T00:25:56.289226114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:56.290329 containerd[1460]: time="2026-01-24T00:25:56.289304068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:56.290329 containerd[1460]: time="2026-01-24T00:25:56.289324636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.290329 containerd[1460]: time="2026-01-24T00:25:56.289647145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:55.897 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0 coredns-674b8bbfcf- kube-system 0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0 1017 0 2026-01-24 00:25:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2vn8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califfd19156448 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:55.897 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:55.989 [INFO][4331] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" HandleID="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:55.990 [INFO][4331] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" HandleID="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2vn8x", "timestamp":"2026-01-24 00:25:55.989502446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:55.990 [INFO][4331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.124 [INFO][4331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.124 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.141 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.172 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.190 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.197 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.201 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.202 [INFO][4331] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.205 [INFO][4331] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56 Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.213 [INFO][4331] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.227 [INFO][4331] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.227 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" host="localhost" Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.227 [INFO][4331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:56.301531 containerd[1460]: 2026-01-24 00:25:56.227 [INFO][4331] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" HandleID="k8s-pod-network.5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.302542 containerd[1460]: 2026-01-24 00:25:56.235 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2vn8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfd19156448", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.302542 containerd[1460]: 2026-01-24 00:25:56.235 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.302542 containerd[1460]: 2026-01-24 00:25:56.235 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfd19156448 ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.302542 containerd[1460]: 2026-01-24 00:25:56.263 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.302542 
containerd[1460]: 2026-01-24 00:25:56.266 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56", Pod:"coredns-674b8bbfcf-2vn8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfd19156448", MAC:"d6:fa:c8:c6:35:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.302542 containerd[1460]: 2026-01-24 00:25:56.285 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vn8x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:25:56.333856 systemd[1]: Started cri-containerd-a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d.scope - libcontainer container a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d. Jan 24 00:25:56.342753 containerd[1460]: time="2026-01-24T00:25:56.342458038Z" level=info msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" Jan 24 00:25:56.392491 containerd[1460]: time="2026-01-24T00:25:56.388176256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:56.392491 containerd[1460]: time="2026-01-24T00:25:56.388943081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:56.392491 containerd[1460]: time="2026-01-24T00:25:56.389285966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.406945 containerd[1460]: time="2026-01-24T00:25:56.398852275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.408611 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:56.424350 systemd-networkd[1371]: calia17d89fb860: Link UP Jan 24 00:25:56.432544 systemd-networkd[1371]: calia17d89fb860: Gained carrier Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:55.868 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0 calico-kube-controllers-7f664d4f9c- calico-system 2b7e3139-1ac0-464d-91ba-3ef9871bf348 1018 0 2026-01-24 00:25:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f664d4f9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f664d4f9c-5l5qb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia17d89fb860 [] [] }} ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:55.868 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.041 [INFO][4321] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" HandleID="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.048 [INFO][4321] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" HandleID="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004342b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f664d4f9c-5l5qb", "timestamp":"2026-01-24 00:25:56.041976914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.048 [INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.229 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.229 [INFO][4321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.273 [INFO][4321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.289 [INFO][4321] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.300 [INFO][4321] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.311 [INFO][4321] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.316 [INFO][4321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.316 [INFO][4321] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.321 [INFO][4321] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.335 [INFO][4321] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4321] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4321] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" host="localhost" Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:56.490273 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4321] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" HandleID="k8s-pod-network.2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.391 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0", GenerateName:"calico-kube-controllers-7f664d4f9c-", Namespace:"calico-system", SelfLink:"", UID:"2b7e3139-1ac0-464d-91ba-3ef9871bf348", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f664d4f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f664d4f9c-5l5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17d89fb860", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.391 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.391 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia17d89fb860 ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.432 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.433 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0", GenerateName:"calico-kube-controllers-7f664d4f9c-", Namespace:"calico-system", SelfLink:"", UID:"2b7e3139-1ac0-464d-91ba-3ef9871bf348", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f664d4f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd", Pod:"calico-kube-controllers-7f664d4f9c-5l5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17d89fb860", MAC:"8a:dc:d3:86:1f:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.491535 containerd[1460]: 2026-01-24 00:25:56.486 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd" Namespace="calico-system" Pod="calico-kube-controllers-7f664d4f9c-5l5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:25:56.517480 systemd[1]: Started cri-containerd-5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56.scope - libcontainer container 5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56. 
Jan 24 00:25:56.548364 containerd[1460]: time="2026-01-24T00:25:56.548302711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmh9g,Uid:516a3626-ef38-4d36-84e3-1a27e671269b,Namespace:kube-system,Attempt:1,} returns sandbox id \"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d\"" Jan 24 00:25:56.571350 kubelet[2571]: E0124 00:25:56.571093 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:56.584528 containerd[1460]: time="2026-01-24T00:25:56.584194577Z" level=info msg="CreateContainer within sandbox \"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:25:56.622540 systemd-networkd[1371]: cali79e02191390: Link UP Jan 24 00:25:56.628518 systemd-networkd[1371]: cali79e02191390: Gained carrier Jan 24 00:25:56.633128 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:56.692275 systemd[1]: run-netns-cni\x2d216ef62f\x2d7b68\x2dc25a\x2dfd7e\x2d858f071e3955.mount: Deactivated successfully. Jan 24 00:25:56.699469 containerd[1460]: time="2026-01-24T00:25:56.691986258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:56.699469 containerd[1460]: time="2026-01-24T00:25:56.692441235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:56.699469 containerd[1460]: time="2026-01-24T00:25:56.692473474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.699469 containerd[1460]: time="2026-01-24T00:25:56.694179463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.692454 systemd[1]: run-netns-cni\x2d2fff943b\x2dbc70\x2d4677\x2dc358\x2dc7aa76c8fadb.mount: Deactivated successfully. 
Jan 24 00:25:56.705332 containerd[1460]: time="2026-01-24T00:25:56.700918741Z" level=info msg="CreateContainer within sandbox \"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0aca03ba685a6109f041af22ee90f6564c10bbc0fa610cf2511e19a1ce32de2d\"" Jan 24 00:25:56.705332 containerd[1460]: time="2026-01-24T00:25:56.704672101Z" level=info msg="StartContainer for \"0aca03ba685a6109f041af22ee90f6564c10bbc0fa610cf2511e19a1ce32de2d\"" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:55.955 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--grfd7-eth0 csi-node-driver- calico-system 677a3c6a-a428-4746-be4d-2080a36b4930 1020 0 2026-01-24 00:25:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-grfd7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali79e02191390 [] [] }} ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:55.962 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.080 [INFO][4341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" HandleID="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.081 [INFO][4341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" HandleID="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-grfd7", "timestamp":"2026-01-24 00:25:56.080252493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.081 [INFO][4341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.371 [INFO][4341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.414 [INFO][4341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.430 [INFO][4341] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.477 [INFO][4341] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.482 [INFO][4341] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.491 [INFO][4341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.492 [INFO][4341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.497 [INFO][4341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.509 [INFO][4341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.531 [INFO][4341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.531 [INFO][4341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" host="localhost" Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.531 [INFO][4341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:56.786238 containerd[1460]: 2026-01-24 00:25:56.531 [INFO][4341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" HandleID="k8s-pod-network.c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.570 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--grfd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"677a3c6a-a428-4746-be4d-2080a36b4930", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-grfd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e02191390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.573 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.573 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79e02191390 ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.633 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.639 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--grfd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"677a3c6a-a428-4746-be4d-2080a36b4930", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb", Pod:"csi-node-driver-grfd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e02191390", MAC:"26:14:ba:1d:e2:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:56.787281 containerd[1460]: 2026-01-24 00:25:56.736 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb" Namespace="calico-system" Pod="csi-node-driver-grfd7" WorkloadEndpoint="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:25:56.820920 systemd[1]: Started cri-containerd-0aca03ba685a6109f041af22ee90f6564c10bbc0fa610cf2511e19a1ce32de2d.scope - libcontainer container 0aca03ba685a6109f041af22ee90f6564c10bbc0fa610cf2511e19a1ce32de2d. Jan 24 00:25:56.835878 systemd[1]: Started cri-containerd-2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd.scope - libcontainer container 2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd. 
Jan 24 00:25:56.846800 containerd[1460]: time="2026-01-24T00:25:56.846266102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vn8x,Uid:0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0,Namespace:kube-system,Attempt:1,} returns sandbox id \"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56\"" Jan 24 00:25:56.867406 kubelet[2571]: E0124 00:25:56.866988 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:56.898419 containerd[1460]: time="2026-01-24T00:25:56.898362910Z" level=info msg="CreateContainer within sandbox \"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:25:56.925210 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:56.926827 containerd[1460]: time="2026-01-24T00:25:56.926484206Z" level=info msg="CreateContainer within sandbox \"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74174d058aa39fc460795642f850a50bb003f86ac5ce21aa7ea2b844c4285923\"" Jan 24 00:25:56.937334 containerd[1460]: time="2026-01-24T00:25:56.935634908Z" level=info msg="StartContainer for \"74174d058aa39fc460795642f850a50bb003f86ac5ce21aa7ea2b844c4285923\"" Jan 24 00:25:56.973137 containerd[1460]: time="2026-01-24T00:25:56.972740666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:56.973137 containerd[1460]: time="2026-01-24T00:25:56.973009145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:56.973313 containerd[1460]: time="2026-01-24T00:25:56.973160736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.974838 containerd[1460]: time="2026-01-24T00:25:56.973320533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:56.989440 systemd-networkd[1371]: calie9035d52da2: Link UP Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.579 [INFO][4451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.580 [INFO][4451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" iface="eth0" netns="/var/run/netns/cni-41074f9b-bff4-74e5-0b77-14788bb62223" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.584 [INFO][4451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" iface="eth0" netns="/var/run/netns/cni-41074f9b-bff4-74e5-0b77-14788bb62223" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.589 [INFO][4451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" iface="eth0" netns="/var/run/netns/cni-41074f9b-bff4-74e5-0b77-14788bb62223" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.589 [INFO][4451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.589 [INFO][4451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.730 [INFO][4505] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.730 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.886 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.930 [WARNING][4505] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.930 [INFO][4505] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.942 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:56.990659 containerd[1460]: 2026-01-24 00:25:56.974 [INFO][4451] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:25:56.990891 systemd-networkd[1371]: calie9035d52da2: Gained carrier Jan 24 00:25:56.994253 containerd[1460]: time="2026-01-24T00:25:56.993073518Z" level=info msg="TearDown network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" successfully" Jan 24 00:25:56.994253 containerd[1460]: time="2026-01-24T00:25:56.993105207Z" level=info msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" returns successfully" Jan 24 00:25:56.995393 containerd[1460]: time="2026-01-24T00:25:56.994673820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4kf4b,Uid:c9124158-0f90-4bb6-8fd8-7f63bd272b78,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:25:57.004282 containerd[1460]: time="2026-01-24T00:25:57.004120190Z" level=info msg="StartContainer for \"0aca03ba685a6109f041af22ee90f6564c10bbc0fa610cf2511e19a1ce32de2d\" returns successfully" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:55.977 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--jfbl5-eth0 goldmane-666569f655- calico-system 3c957499-b83a-4ee9-8faf-8cc8bcb63fe3 1016 0 2026-01-24 00:25:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-jfbl5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie9035d52da2 [] [] }} ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:55.978 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.104 [INFO][4349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" HandleID="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.104 [INFO][4349] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" HandleID="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012dbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-jfbl5", "timestamp":"2026-01-24 00:25:56.104293186 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.105 [INFO][4349] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.532 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.532 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.575 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.626 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.758 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.786 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.794 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.794 [INFO][4349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.800 [INFO][4349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.824 [INFO][4349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.875 [INFO][4349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.882 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" host="localhost" Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.882 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:57.030330 containerd[1460]: 2026-01-24 00:25:56.882 [INFO][4349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" HandleID="k8s-pod-network.700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:56.922 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfbl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-jfbl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9035d52da2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:56.934 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:56.934 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9035d52da2 ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:56.991 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:56.992 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfbl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe", Pod:"goldmane-666569f655-jfbl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9035d52da2", MAC:"fa:d9:f9:56:a7:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:57.031226 containerd[1460]: 2026-01-24 00:25:57.008 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe" Namespace="calico-system" Pod="goldmane-666569f655-jfbl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:25:57.037697 containerd[1460]: time="2026-01-24T00:25:57.037505962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f664d4f9c-5l5qb,Uid:2b7e3139-1ac0-464d-91ba-3ef9871bf348,Namespace:calico-system,Attempt:1,} returns sandbox id \"2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd\"" Jan 24 00:25:57.066909 containerd[1460]: time="2026-01-24T00:25:57.066796978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:25:57.071841 systemd[1]: Started cri-containerd-c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb.scope - libcontainer container c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb. Jan 24 00:25:57.131491 containerd[1460]: time="2026-01-24T00:25:57.127679374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:57.131491 containerd[1460]: time="2026-01-24T00:25:57.127764253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:57.131491 containerd[1460]: time="2026-01-24T00:25:57.127777837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:57.131491 containerd[1460]: time="2026-01-24T00:25:57.127941702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:57.135996 systemd[1]: Started cri-containerd-74174d058aa39fc460795642f850a50bb003f86ac5ce21aa7ea2b844c4285923.scope - libcontainer container 74174d058aa39fc460795642f850a50bb003f86ac5ce21aa7ea2b844c4285923. Jan 24 00:25:57.219829 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:57.241738 systemd[1]: Started cri-containerd-700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe.scope - libcontainer container 700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe. Jan 24 00:25:57.249758 containerd[1460]: time="2026-01-24T00:25:57.249299644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:57.259823 containerd[1460]: time="2026-01-24T00:25:57.259733552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:25:57.262252 containerd[1460]: time="2026-01-24T00:25:57.261072089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:25:57.262505 kubelet[2571]: E0124 00:25:57.262457 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:25:57.262901 kubelet[2571]: E0124 00:25:57.262517 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:25:57.263221 containerd[1460]: time="2026-01-24T00:25:57.263134378Z" level=info msg="StartContainer for \"74174d058aa39fc460795642f850a50bb003f86ac5ce21aa7ea2b844c4285923\" returns successfully" Jan 24 00:25:57.263702 kubelet[2571]: E0124 00:25:57.263550 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-524nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:57.265050 kubelet[2571]: E0124 00:25:57.264966 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:25:57.279745 systemd-networkd[1371]: califfd19156448: Gained IPv6LL Jan 24 00:25:57.280541 
systemd-networkd[1371]: cali9ae0c23f822: Gained IPv6LL Jan 24 00:25:57.322277 containerd[1460]: time="2026-01-24T00:25:57.322173214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grfd7,Uid:677a3c6a-a428-4746-be4d-2080a36b4930,Namespace:calico-system,Attempt:1,} returns sandbox id \"c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb\"" Jan 24 00:25:57.330777 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:57.331201 containerd[1460]: time="2026-01-24T00:25:57.330187631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:25:57.403935 containerd[1460]: time="2026-01-24T00:25:57.403887433Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:57.408982 containerd[1460]: time="2026-01-24T00:25:57.407334951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:25:57.408982 containerd[1460]: time="2026-01-24T00:25:57.407437462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:25:57.409141 kubelet[2571]: E0124 00:25:57.407721 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:25:57.409141 kubelet[2571]: E0124 00:25:57.407904 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:25:57.409141 kubelet[2571]: E0124 00:25:57.408225 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:57.416505 containerd[1460]: time="2026-01-24T00:25:57.416410332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:25:57.429474 containerd[1460]: time="2026-01-24T00:25:57.429372444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfbl5,Uid:3c957499-b83a-4ee9-8faf-8cc8bcb63fe3,Namespace:calico-system,Attempt:1,} returns sandbox id \"700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe\"" Jan 24 00:25:57.457924 systemd-networkd[1371]: calia8ad13ebc51: Link UP Jan 24 00:25:57.460829 systemd-networkd[1371]: calia8ad13ebc51: Gained carrier Jan 24 00:25:57.483242 containerd[1460]: time="2026-01-24T00:25:57.483110285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:57.486061 containerd[1460]: time="2026-01-24T00:25:57.485803227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:25:57.486061 containerd[1460]: time="2026-01-24T00:25:57.485855586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:25:57.486239 kubelet[2571]: 
E0124 00:25:57.486184 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:25:57.486305 kubelet[2571]: E0124 00:25:57.486240 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:25:57.486509 kubelet[2571]: E0124 00:25:57.486431 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:57.488728 containerd[1460]: time="2026-01-24T00:25:57.488619276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:25:57.489122 kubelet[2571]: E0124 00:25:57.488904 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.145 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0 calico-apiserver-ff5668969- calico-apiserver c9124158-0f90-4bb6-8fd8-7f63bd272b78 1037 0 2026-01-24 00:25:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff5668969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ff5668969-4kf4b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia8ad13ebc51 [] [] }} ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.146 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.295 [INFO][4695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" HandleID="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.296 [INFO][4695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" HandleID="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ff5668969-4kf4b", "timestamp":"2026-01-24 00:25:57.295989955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.296 [INFO][4695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.297 [INFO][4695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.297 [INFO][4695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.325 [INFO][4695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.386 [INFO][4695] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.401 [INFO][4695] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.407 [INFO][4695] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.420 [INFO][4695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.420 [INFO][4695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.425 [INFO][4695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.437 [INFO][4695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.449 [INFO][4695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.450 [INFO][4695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" host="localhost" Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.450 [INFO][4695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:57.490120 containerd[1460]: 2026-01-24 00:25:57.450 [INFO][4695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" HandleID="k8s-pod-network.07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.454 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9124158-0f90-4bb6-8fd8-7f63bd272b78", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ff5668969-4kf4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ad13ebc51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.455 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.455 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ad13ebc51 ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.460 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.461 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9124158-0f90-4bb6-8fd8-7f63bd272b78", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e", Pod:"calico-apiserver-ff5668969-4kf4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ad13ebc51", MAC:"86:a3:a1:f1:5b:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:57.491105 containerd[1460]: 2026-01-24 00:25:57.481 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4kf4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:25:57.538687 containerd[1460]: time="2026-01-24T00:25:57.534989661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:57.538687 containerd[1460]: time="2026-01-24T00:25:57.536328089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:57.538687 containerd[1460]: time="2026-01-24T00:25:57.536341554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:57.538687 containerd[1460]: time="2026-01-24T00:25:57.536454533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:57.557649 containerd[1460]: time="2026-01-24T00:25:57.556413910Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:57.560210 containerd[1460]: time="2026-01-24T00:25:57.559914110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:25:57.560343 containerd[1460]: time="2026-01-24T00:25:57.559950082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:25:57.561194 kubelet[2571]: E0124 00:25:57.560804 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:25:57.561863 kubelet[2571]: E0124 00:25:57.561320 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:25:57.562267 kubelet[2571]: E0124 00:25:57.562151 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95qkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:57.564840 kubelet[2571]: E0124 00:25:57.564758 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:25:57.572956 systemd[1]: Started cri-containerd-07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e.scope - libcontainer container 07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e. Jan 24 00:25:57.594087 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:57.632303 containerd[1460]: time="2026-01-24T00:25:57.632245624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4kf4b,Uid:c9124158-0f90-4bb6-8fd8-7f63bd272b78,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e\"" Jan 24 00:25:57.634837 containerd[1460]: time="2026-01-24T00:25:57.634770628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:25:57.667553 systemd[1]: run-netns-cni\x2d41074f9b\x2dbff4\x2d74e5\x2d0b77\x2d14788bb62223.mount: Deactivated successfully. 
Jan 24 00:25:57.708522 containerd[1460]: time="2026-01-24T00:25:57.708376959Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:57.710398 containerd[1460]: time="2026-01-24T00:25:57.710327247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:25:57.710498 containerd[1460]: time="2026-01-24T00:25:57.710372341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:25:57.710915 kubelet[2571]: E0124 00:25:57.710776 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:25:57.710915 kubelet[2571]: E0124 00:25:57.710895 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:25:57.711255 kubelet[2571]: E0124 00:25:57.711172 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:57.712684 kubelet[2571]: E0124 00:25:57.712516 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:25:57.896755 kubelet[2571]: E0124 00:25:57.896506 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:25:57.903963 kubelet[2571]: E0124 00:25:57.903723 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:25:57.910409 kubelet[2571]: E0124 00:25:57.910279 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:57.916497 kubelet[2571]: E0124 00:25:57.916383 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:57.925480 kubelet[2571]: E0124 00:25:57.925358 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:57.935270 kubelet[2571]: E0124 00:25:57.935195 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:25:57.940352 kubelet[2571]: I0124 00:25:57.939999 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xmh9g" podStartSLOduration=40.939980419 podStartE2EDuration="40.939980419s" podCreationTimestamp="2026-01-24 00:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:57.936276578 +0000 UTC m=+43.776810429" watchObservedRunningTime="2026-01-24 00:25:57.939980419 +0000 UTC m=+43.780514060" Jan 24 00:25:58.030690 kubelet[2571]: I0124 00:25:58.030437 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2vn8x" podStartSLOduration=41.030412287 podStartE2EDuration="41.030412287s" podCreationTimestamp="2026-01-24 00:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:58.02963072 +0000 UTC m=+43.870164361" watchObservedRunningTime="2026-01-24 00:25:58.030412287 +0000 UTC m=+43.870945927" Jan 24 00:25:58.045924 systemd-networkd[1371]: calia17d89fb860: Gained IPv6LL Jan 24 00:25:58.173166 systemd-networkd[1371]: calie9035d52da2: Gained IPv6LL Jan 24 00:25:58.340008 containerd[1460]: time="2026-01-24T00:25:58.338878837Z" level=info msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" Jan 24 00:25:58.494354 systemd-networkd[1371]: cali79e02191390: Gained IPv6LL Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.428 [INFO][4824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.428 [INFO][4824] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" iface="eth0" netns="/var/run/netns/cni-e489fc93-97a1-f17c-1707-d2b9b5d6834e" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.429 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" iface="eth0" netns="/var/run/netns/cni-e489fc93-97a1-f17c-1707-d2b9b5d6834e" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.429 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" iface="eth0" netns="/var/run/netns/cni-e489fc93-97a1-f17c-1707-d2b9b5d6834e" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.429 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.429 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.481 [INFO][4833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.482 [INFO][4833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.482 [INFO][4833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.490 [WARNING][4833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.490 [INFO][4833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.493 [INFO][4833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:25:58.504279 containerd[1460]: 2026-01-24 00:25:58.500 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:25:58.508364 containerd[1460]: time="2026-01-24T00:25:58.505987999Z" level=info msg="TearDown network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" successfully" Jan 24 00:25:58.508364 containerd[1460]: time="2026-01-24T00:25:58.506079891Z" level=info msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" returns successfully" Jan 24 00:25:58.508364 containerd[1460]: time="2026-01-24T00:25:58.507421889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4dlrd,Uid:40ee8f0f-9c75-4f11-bb2e-9eb000639316,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:25:58.510528 systemd[1]: run-netns-cni\x2de489fc93\x2d97a1\x2df17c\x2d1707\x2dd2b9b5d6834e.mount: Deactivated successfully. Jan 24 00:25:58.620894 systemd-networkd[1371]: calia8ad13ebc51: Gained IPv6LL Jan 24 00:25:58.701743 systemd-networkd[1371]: calieb8a346372f: Link UP Jan 24 00:25:58.703724 systemd-networkd[1371]: calieb8a346372f: Gained carrier Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.606 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0 calico-apiserver-ff5668969- calico-apiserver 40ee8f0f-9c75-4f11-bb2e-9eb000639316 1106 0 2026-01-24 00:25:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff5668969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ff5668969-4dlrd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb8a346372f [] [] }} ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.606 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.639 [INFO][4857] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" HandleID="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.639 [INFO][4857] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" HandleID="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ff5668969-4dlrd", "timestamp":"2026-01-24 00:25:58.639777191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.640 [INFO][4857] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.640 [INFO][4857] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.640 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.652 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.663 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.672 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.675 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.678 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.678 [INFO][4857] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.680 [INFO][4857] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9 Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.685 [INFO][4857] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.693 [INFO][4857] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.693 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" host="localhost" Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.693 [INFO][4857] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:25:58.723066 containerd[1460]: 2026-01-24 00:25:58.693 [INFO][4857] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" HandleID="k8s-pod-network.5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.698 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ee8f0f-9c75-4f11-bb2e-9eb000639316", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ff5668969-4dlrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8a346372f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.698 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.698 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb8a346372f ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.703 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.704 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ee8f0f-9c75-4f11-bb2e-9eb000639316", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9", Pod:"calico-apiserver-ff5668969-4dlrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8a346372f", MAC:"0a:7d:2c:dc:bb:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:25:58.724539 containerd[1460]: 2026-01-24 00:25:58.719 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9" Namespace="calico-apiserver" Pod="calico-apiserver-ff5668969-4dlrd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:25:58.758720 containerd[1460]: time="2026-01-24T00:25:58.758244787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:25:58.758720 containerd[1460]: time="2026-01-24T00:25:58.758338891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:25:58.758720 containerd[1460]: time="2026-01-24T00:25:58.758357486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:58.758720 containerd[1460]: time="2026-01-24T00:25:58.758484052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:25:58.791837 systemd[1]: Started cri-containerd-5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9.scope - libcontainer container 5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9. 
Jan 24 00:25:58.809806 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:25:58.842542 containerd[1460]: time="2026-01-24T00:25:58.842441361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff5668969-4dlrd,Uid:40ee8f0f-9c75-4f11-bb2e-9eb000639316,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9\"" Jan 24 00:25:58.846091 containerd[1460]: time="2026-01-24T00:25:58.845074902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:25:58.903581 containerd[1460]: time="2026-01-24T00:25:58.903406235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:25:58.905653 containerd[1460]: time="2026-01-24T00:25:58.905504804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:25:58.905891 containerd[1460]: time="2026-01-24T00:25:58.905673198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:25:58.905957 kubelet[2571]: E0124 00:25:58.905855 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:25:58.905957 kubelet[2571]: E0124 00:25:58.905924 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:25:58.907093 kubelet[2571]: E0124 00:25:58.906200 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jz9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:25:58.908098 kubelet[2571]: E0124 00:25:58.907982 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:25:58.939535 kubelet[2571]: E0124 00:25:58.939294 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:58.940631 kubelet[2571]: E0124 00:25:58.939971 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:58.940946 kubelet[2571]: E0124 00:25:58.940864 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:25:58.941292 kubelet[2571]: E0124 00:25:58.941136 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:25:58.941292 kubelet[2571]: 
E0124 00:25:58.941178 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:25:58.941890 kubelet[2571]: E0124 00:25:58.941790 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:25:58.945360 kubelet[2571]: E0124 00:25:58.945297 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:25:59.941985 kubelet[2571]: E0124 00:25:59.941489 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:25:59.942506 kubelet[2571]: E0124 00:25:59.942229 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:25:59.957236 kubelet[2571]: E0124 00:25:59.951506 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:00.733136 systemd-networkd[1371]: calieb8a346372f: Gained IPv6LL Jan 24 
00:26:06.341793 containerd[1460]: time="2026-01-24T00:26:06.341484850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:26:06.422785 containerd[1460]: time="2026-01-24T00:26:06.422518316Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:06.424296 containerd[1460]: time="2026-01-24T00:26:06.424137256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:26:06.424296 containerd[1460]: time="2026-01-24T00:26:06.424276645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:26:06.424540 kubelet[2571]: E0124 00:26:06.424440 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:26:06.425329 kubelet[2571]: E0124 00:26:06.424508 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:26:06.425329 kubelet[2571]: E0124 00:26:06.424900 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7b482aad45a047b28315ef7e942c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:06.428470 containerd[1460]: time="2026-01-24T00:26:06.428380464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:26:06.498856 containerd[1460]: time="2026-01-24T00:26:06.498741099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:06.501402 containerd[1460]: time="2026-01-24T00:26:06.501306739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:26:06.501675 containerd[1460]: time="2026-01-24T00:26:06.501398580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:26:06.501801 kubelet[2571]: E0124 00:26:06.501711 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:26:06.501801 kubelet[2571]: E0124 00:26:06.501773 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:26:06.502385 kubelet[2571]: E0124 00:26:06.501900 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:06.503152 kubelet[2571]: E0124 00:26:06.502986 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:26:10.338982 containerd[1460]: time="2026-01-24T00:26:10.338890634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:26:10.404861 containerd[1460]: time="2026-01-24T00:26:10.404550696Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:10.406953 containerd[1460]: time="2026-01-24T00:26:10.406724374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:26:10.406953 containerd[1460]: time="2026-01-24T00:26:10.406842224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:26:10.407251 kubelet[2571]: E0124 00:26:10.407184 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:26:10.407251 kubelet[2571]: E0124 00:26:10.407243 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:26:10.408204 kubelet[2571]: E0124 00:26:10.407804 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-524nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:10.408528 containerd[1460]: time="2026-01-24T00:26:10.407675883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:26:10.409604 kubelet[2571]: E0124 00:26:10.409485 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:26:10.470538 containerd[1460]: time="2026-01-24T00:26:10.470409944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:10.472545 containerd[1460]: time="2026-01-24T00:26:10.472426447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:26:10.472772 containerd[1460]: time="2026-01-24T00:26:10.472472953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:10.473113 kubelet[2571]: E0124 00:26:10.472988 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:26:10.473113 kubelet[2571]: E0124 00:26:10.473098 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:26:10.473378 kubelet[2571]: 
E0124 00:26:10.473281 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95qkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:10.474824 kubelet[2571]: E0124 00:26:10.474699 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" 
podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:26:11.339256 containerd[1460]: time="2026-01-24T00:26:11.339002107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:26:11.403695 containerd[1460]: time="2026-01-24T00:26:11.403411070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:11.405638 containerd[1460]: time="2026-01-24T00:26:11.405400961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:26:11.405776 containerd[1460]: time="2026-01-24T00:26:11.405530639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:11.406234 kubelet[2571]: E0124 00:26:11.405951 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:11.406234 kubelet[2571]: E0124 00:26:11.406109 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:11.406480 kubelet[2571]: E0124 00:26:11.406341 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jz9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:11.408377 kubelet[2571]: E0124 00:26:11.408180 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:26:12.341733 containerd[1460]: time="2026-01-24T00:26:12.340735790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:26:12.407397 containerd[1460]: time="2026-01-24T00:26:12.407252376Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:12.409274 containerd[1460]: time="2026-01-24T00:26:12.409143595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:26:12.409274 containerd[1460]: time="2026-01-24T00:26:12.409217632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:26:12.409682 kubelet[2571]: E0124 00:26:12.409488 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:26:12.410283 kubelet[2571]: E0124 00:26:12.409667 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:26:12.410283 kubelet[2571]: E0124 00:26:12.409979 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:12.412517 containerd[1460]: time="2026-01-24T00:26:12.412428231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:26:12.487354 containerd[1460]: time="2026-01-24T00:26:12.487108596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:12.488952 containerd[1460]: time="2026-01-24T00:26:12.488874790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:26:12.489086 containerd[1460]: time="2026-01-24T00:26:12.488943713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:26:12.489430 kubelet[2571]: E0124 00:26:12.489305 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:26:12.489430 kubelet[2571]: E0124 00:26:12.489373 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:26:12.489729 kubelet[2571]: E0124 00:26:12.489535 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:12.491224 kubelet[2571]: E0124 00:26:12.491092 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:26:14.298448 containerd[1460]: time="2026-01-24T00:26:14.298170979Z" level=info msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" Jan 24 00:26:14.347860 containerd[1460]: time="2026-01-24T00:26:14.344267776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:26:14.447298 containerd[1460]: time="2026-01-24T00:26:14.447218686Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:14.479220 containerd[1460]: time="2026-01-24T00:26:14.478843317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:26:14.479410 containerd[1460]: time="2026-01-24T00:26:14.479188071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:14.480122 kubelet[2571]: E0124 00:26:14.479938 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:14.480846 kubelet[2571]: E0124 00:26:14.480164 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:14.480846 kubelet[2571]: E0124 00:26:14.480348 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:14.482245 kubelet[2571]: E0124 00:26:14.482203 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.417 [WARNING][4945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9124158-0f90-4bb6-8fd8-7f63bd272b78", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e", Pod:"calico-apiserver-ff5668969-4kf4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ad13ebc51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.418 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.418 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" iface="eth0" netns="" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.418 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.418 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.507 [INFO][4955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.509 [INFO][4955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.509 [INFO][4955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.521 [WARNING][4955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.521 [INFO][4955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.526 [INFO][4955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:14.533175 containerd[1460]: 2026-01-24 00:26:14.530 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.533175 containerd[1460]: time="2026-01-24T00:26:14.533128504Z" level=info msg="TearDown network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" successfully" Jan 24 00:26:14.533175 containerd[1460]: time="2026-01-24T00:26:14.533165854Z" level=info msg="StopPodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" returns successfully" Jan 24 00:26:14.542553 containerd[1460]: time="2026-01-24T00:26:14.542311772Z" level=info msg="RemovePodSandbox for \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" Jan 24 00:26:14.552384 containerd[1460]: time="2026-01-24T00:26:14.551978184Z" level=info msg="Forcibly stopping sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\"" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.649 [WARNING][4973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9124158-0f90-4bb6-8fd8-7f63bd272b78", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07960a78ffe6d24a1d1ccba36269f8a8887a405834f68d25a3294921f6d1ce3e", Pod:"calico-apiserver-ff5668969-4kf4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ad13ebc51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.651 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.651 [INFO][4973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" iface="eth0" netns="" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.651 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.651 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.734 [INFO][4982] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.734 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.735 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.749 [WARNING][4982] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.750 [INFO][4982] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" HandleID="k8s-pod-network.825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Workload="localhost-k8s-calico--apiserver--ff5668969--4kf4b-eth0" Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.771 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:14.781317 containerd[1460]: 2026-01-24 00:26:14.776 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f" Jan 24 00:26:14.782335 containerd[1460]: time="2026-01-24T00:26:14.781368445Z" level=info msg="TearDown network for sandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" successfully" Jan 24 00:26:14.810317 containerd[1460]: time="2026-01-24T00:26:14.808755103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:14.810317 containerd[1460]: time="2026-01-24T00:26:14.808861150Z" level=info msg="RemovePodSandbox \"825168c5a3db1878f8c9a38fbcce9e9142bb25f141f55bae51f2dd22a052d70f\" returns successfully" Jan 24 00:26:14.810317 containerd[1460]: time="2026-01-24T00:26:14.809737109Z" level=info msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.907 [WARNING][5000] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfbl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe", Pod:"goldmane-666569f655-jfbl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9035d52da2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.908 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.908 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" iface="eth0" netns="" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.908 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.908 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.981 [INFO][5009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.982 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.982 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.998 [WARNING][5009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:14.999 [INFO][5009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:15.003 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.012670 containerd[1460]: 2026-01-24 00:26:15.008 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.013541 containerd[1460]: time="2026-01-24T00:26:15.012715477Z" level=info msg="TearDown network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" successfully" Jan 24 00:26:15.013541 containerd[1460]: time="2026-01-24T00:26:15.012744601Z" level=info msg="StopPodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" returns successfully" Jan 24 00:26:15.014333 containerd[1460]: time="2026-01-24T00:26:15.014268095Z" level=info msg="RemovePodSandbox for \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" Jan 24 00:26:15.014385 containerd[1460]: time="2026-01-24T00:26:15.014335451Z" level=info msg="Forcibly stopping sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\"" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.098 [WARNING][5026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfbl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c957499-b83a-4ee9-8faf-8cc8bcb63fe3", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700cce7c46f0c1ad4d5da3ac56bc8394e07258adf87ef72b4dde9b34769a2ffe", Pod:"goldmane-666569f655-jfbl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9035d52da2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.100 [INFO][5026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.100 [INFO][5026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" iface="eth0" netns="" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.100 [INFO][5026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.100 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.142 [INFO][5035] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.142 [INFO][5035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.142 [INFO][5035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.157 [WARNING][5035] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.157 [INFO][5035] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" HandleID="k8s-pod-network.5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Workload="localhost-k8s-goldmane--666569f655--jfbl5-eth0" Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.163 [INFO][5035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.171283 containerd[1460]: 2026-01-24 00:26:15.167 [INFO][5026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02" Jan 24 00:26:15.171283 containerd[1460]: time="2026-01-24T00:26:15.171203807Z" level=info msg="TearDown network for sandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" successfully" Jan 24 00:26:15.178795 containerd[1460]: time="2026-01-24T00:26:15.178690390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:15.178795 containerd[1460]: time="2026-01-24T00:26:15.178775278Z" level=info msg="RemovePodSandbox \"5beec7cb0be96cfdd119d3fc22904367dc25cc0a1d0c12d3c8f1a40386e41e02\" returns successfully" Jan 24 00:26:15.179780 containerd[1460]: time="2026-01-24T00:26:15.179696041Z" level=info msg="StopPodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.256 [WARNING][5052] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--grfd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"677a3c6a-a428-4746-be4d-2080a36b4930", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb", Pod:"csi-node-driver-grfd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e02191390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.256 [INFO][5052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.256 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" iface="eth0" netns="" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.256 [INFO][5052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.256 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.299 [INFO][5060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.299 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.299 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.308 [WARNING][5060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.308 [INFO][5060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.314 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.321283 containerd[1460]: 2026-01-24 00:26:15.317 [INFO][5052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.322470 containerd[1460]: time="2026-01-24T00:26:15.321324059Z" level=info msg="TearDown network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" successfully" Jan 24 00:26:15.322470 containerd[1460]: time="2026-01-24T00:26:15.321367559Z" level=info msg="StopPodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" returns successfully" Jan 24 00:26:15.322470 containerd[1460]: time="2026-01-24T00:26:15.322425182Z" level=info msg="RemovePodSandbox for \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" Jan 24 00:26:15.322470 containerd[1460]: time="2026-01-24T00:26:15.322463122Z" level=info msg="Forcibly stopping sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\"" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.391 [WARNING][5077] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--grfd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"677a3c6a-a428-4746-be4d-2080a36b4930", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c529cbd2698c76126492c62396c30966b314d2a901c694d3fd94d1c6d02edcdb", Pod:"csi-node-driver-grfd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e02191390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.391 [INFO][5077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.392 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" iface="eth0" netns="" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.392 [INFO][5077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.392 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.440 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.440 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.440 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.448 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.449 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" HandleID="k8s-pod-network.e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Workload="localhost-k8s-csi--node--driver--grfd7-eth0" Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.451 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.457197 containerd[1460]: 2026-01-24 00:26:15.454 [INFO][5077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2" Jan 24 00:26:15.457197 containerd[1460]: time="2026-01-24T00:26:15.457153169Z" level=info msg="TearDown network for sandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" successfully" Jan 24 00:26:15.463218 containerd[1460]: time="2026-01-24T00:26:15.463086075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:15.463218 containerd[1460]: time="2026-01-24T00:26:15.463180422Z" level=info msg="RemovePodSandbox \"e5036219dff6f7e74d3129823c3e187fbdf36ca76ca9900c8368b389d7e311a2\" returns successfully" Jan 24 00:26:15.464077 containerd[1460]: time="2026-01-24T00:26:15.463959809Z" level=info msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.533 [WARNING][5102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0", GenerateName:"calico-kube-controllers-7f664d4f9c-", Namespace:"calico-system", SelfLink:"", UID:"2b7e3139-1ac0-464d-91ba-3ef9871bf348", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f664d4f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd", Pod:"calico-kube-controllers-7f664d4f9c-5l5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17d89fb860", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.533 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.533 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" iface="eth0" netns="" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.533 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.533 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.574 [INFO][5110] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.574 [INFO][5110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.574 [INFO][5110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.586 [WARNING][5110] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.586 [INFO][5110] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.590 [INFO][5110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.598441 containerd[1460]: 2026-01-24 00:26:15.594 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.599241 containerd[1460]: time="2026-01-24T00:26:15.598474047Z" level=info msg="TearDown network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" successfully" Jan 24 00:26:15.599241 containerd[1460]: time="2026-01-24T00:26:15.598512879Z" level=info msg="StopPodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" returns successfully" Jan 24 00:26:15.599732 containerd[1460]: time="2026-01-24T00:26:15.599483436Z" level=info msg="RemovePodSandbox for \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" Jan 24 00:26:15.599732 containerd[1460]: time="2026-01-24T00:26:15.599720667Z" level=info msg="Forcibly stopping sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\"" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.661 [WARNING][5129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0", GenerateName:"calico-kube-controllers-7f664d4f9c-", Namespace:"calico-system", SelfLink:"", UID:"2b7e3139-1ac0-464d-91ba-3ef9871bf348", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f664d4f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2acddb69680da3d970c5d27ab2e65d27878c05e8fbb47aa43a3c47b26be8fffd", Pod:"calico-kube-controllers-7f664d4f9c-5l5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17d89fb860", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.662 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.662 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" iface="eth0" netns="" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.662 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.662 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.698 [INFO][5138] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.699 [INFO][5138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.699 [INFO][5138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.709 [WARNING][5138] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.709 [INFO][5138] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" HandleID="k8s-pod-network.e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Workload="localhost-k8s-calico--kube--controllers--7f664d4f9c--5l5qb-eth0" Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.714 [INFO][5138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.719865 containerd[1460]: 2026-01-24 00:26:15.716 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5" Jan 24 00:26:15.719865 containerd[1460]: time="2026-01-24T00:26:15.719800484Z" level=info msg="TearDown network for sandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" successfully" Jan 24 00:26:15.726712 containerd[1460]: time="2026-01-24T00:26:15.726338382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:15.726712 containerd[1460]: time="2026-01-24T00:26:15.726431616Z" level=info msg="RemovePodSandbox \"e58eb5771393251e8b3bf522f8c9e0164ee0d4e9a5232791d478d506ffd3fcf5\" returns successfully" Jan 24 00:26:15.727500 containerd[1460]: time="2026-01-24T00:26:15.727407271Z" level=info msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.799 [WARNING][5155] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56", Pod:"coredns-674b8bbfcf-2vn8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfd19156448", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.800 [INFO][5155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.800 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" iface="eth0" netns="" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.800 [INFO][5155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.800 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.839 [INFO][5163] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.840 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.840 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.864 [WARNING][5163] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.864 [INFO][5163] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.867 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:15.875188 containerd[1460]: 2026-01-24 00:26:15.871 [INFO][5155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:15.875188 containerd[1460]: time="2026-01-24T00:26:15.875028900Z" level=info msg="TearDown network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" successfully" Jan 24 00:26:15.875188 containerd[1460]: time="2026-01-24T00:26:15.875118907Z" level=info msg="StopPodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" returns successfully" Jan 24 00:26:15.876228 containerd[1460]: time="2026-01-24T00:26:15.876112065Z" level=info msg="RemovePodSandbox for \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" Jan 24 00:26:15.876228 containerd[1460]: time="2026-01-24T00:26:15.876149535Z" level=info msg="Forcibly stopping sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\"" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:15.941 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e5f8f70-b739-49dd-97ec-b14f3f8b9ba0", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ca925facdf6eb6cc951d1873cbd10b74e05c567bc289f90e6d8fd60efbd9d56", Pod:"coredns-674b8bbfcf-2vn8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfd19156448", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:15.941 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:15.942 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" iface="eth0" netns="" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:15.942 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:15.942 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.000 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.001 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.001 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.009 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.009 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" HandleID="k8s-pod-network.8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Workload="localhost-k8s-coredns--674b8bbfcf--2vn8x-eth0" Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.012 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.019288 containerd[1460]: 2026-01-24 00:26:16.016 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225" Jan 24 00:26:16.019288 containerd[1460]: time="2026-01-24T00:26:16.019186674Z" level=info msg="TearDown network for sandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" successfully" Jan 24 00:26:16.024611 containerd[1460]: time="2026-01-24T00:26:16.024533181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:16.024720 containerd[1460]: time="2026-01-24T00:26:16.024691656Z" level=info msg="RemovePodSandbox \"8dcdae7285e55fc658d45d8825d9dcba52a247c938a6b98234114ed4efc30225\" returns successfully" Jan 24 00:26:16.025860 containerd[1460]: time="2026-01-24T00:26:16.025782000Z" level=info msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.103 [WARNING][5209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ee8f0f-9c75-4f11-bb2e-9eb000639316", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9", Pod:"calico-apiserver-ff5668969-4dlrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8a346372f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.103 [INFO][5209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.103 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" iface="eth0" netns="" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.104 [INFO][5209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.104 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.145 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.145 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.146 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.174 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.174 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.177 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.185479 containerd[1460]: 2026-01-24 00:26:16.181 [INFO][5209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.185479 containerd[1460]: time="2026-01-24T00:26:16.185472899Z" level=info msg="TearDown network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" successfully" Jan 24 00:26:16.186150 containerd[1460]: time="2026-01-24T00:26:16.185512924Z" level=info msg="StopPodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" returns successfully" Jan 24 00:26:16.186889 containerd[1460]: time="2026-01-24T00:26:16.186771385Z" level=info msg="RemovePodSandbox for \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" Jan 24 00:26:16.186889 containerd[1460]: time="2026-01-24T00:26:16.186813002Z" level=info msg="Forcibly stopping sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\"" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.256 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0", GenerateName:"calico-apiserver-ff5668969-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ee8f0f-9c75-4f11-bb2e-9eb000639316", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff5668969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e68a9e78b0b4af76a63991aeef51262d3593775bdb544e2f7f9db931a8d79d9", Pod:"calico-apiserver-ff5668969-4dlrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8a346372f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.267 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.267 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" iface="eth0" netns="" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.267 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.267 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.310 [INFO][5246] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.310 [INFO][5246] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.310 [INFO][5246] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.317 [WARNING][5246] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.318 [INFO][5246] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" HandleID="k8s-pod-network.f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Workload="localhost-k8s-calico--apiserver--ff5668969--4dlrd-eth0" Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.320 [INFO][5246] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.328298 containerd[1460]: 2026-01-24 00:26:16.324 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21" Jan 24 00:26:16.328298 containerd[1460]: time="2026-01-24T00:26:16.328233806Z" level=info msg="TearDown network for sandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" successfully" Jan 24 00:26:16.332868 containerd[1460]: time="2026-01-24T00:26:16.332749643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:16.332868 containerd[1460]: time="2026-01-24T00:26:16.332823910Z" level=info msg="RemovePodSandbox \"f30705272719cd47f8872ab7ddc9c811a326699be7239b786a03f0d0f8b45b21\" returns successfully" Jan 24 00:26:16.333717 containerd[1460]: time="2026-01-24T00:26:16.333682266Z" level=info msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.408 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"516a3626-ef38-4d36-84e3-1a27e671269b", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d", Pod:"coredns-674b8bbfcf-xmh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae0c23f822", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.409 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.409 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" iface="eth0" netns="" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.409 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.409 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.442 [INFO][5272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.442 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.442 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.458 [WARNING][5272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.458 [INFO][5272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.461 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.468291 containerd[1460]: 2026-01-24 00:26:16.465 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.468291 containerd[1460]: time="2026-01-24T00:26:16.468207319Z" level=info msg="TearDown network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" successfully" Jan 24 00:26:16.468291 containerd[1460]: time="2026-01-24T00:26:16.468246202Z" level=info msg="StopPodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" returns successfully" Jan 24 00:26:16.469863 containerd[1460]: time="2026-01-24T00:26:16.469750563Z" level=info msg="RemovePodSandbox for \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" Jan 24 00:26:16.469863 containerd[1460]: time="2026-01-24T00:26:16.469791709Z" level=info msg="Forcibly stopping sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\"" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.536 [WARNING][5290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"516a3626-ef38-4d36-84e3-1a27e671269b", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a279d116600c58e978bd6c971c1b3f34302906265c7e94c10cde94ca5ef9c95d", Pod:"coredns-674b8bbfcf-xmh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae0c23f822", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.536 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.536 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" iface="eth0" netns="" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.536 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.536 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.566 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.566 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.566 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.574 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.574 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" HandleID="k8s-pod-network.326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Workload="localhost-k8s-coredns--674b8bbfcf--xmh9g-eth0" Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.577 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.585824 containerd[1460]: 2026-01-24 00:26:16.581 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5" Jan 24 00:26:16.585824 containerd[1460]: time="2026-01-24T00:26:16.584697084Z" level=info msg="TearDown network for sandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" successfully" Jan 24 00:26:16.591141 containerd[1460]: time="2026-01-24T00:26:16.591086993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:16.591205 containerd[1460]: time="2026-01-24T00:26:16.591165409Z" level=info msg="RemovePodSandbox \"326e7e5bd404a93a5eee88880b4970aaf5aaf5225a29257acd3a16ceab6211c5\" returns successfully" Jan 24 00:26:16.592188 containerd[1460]: time="2026-01-24T00:26:16.592026861Z" level=info msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.647 [WARNING][5316] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" WorkloadEndpoint="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.647 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.647 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" iface="eth0" netns="" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.647 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.647 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.680 [INFO][5325] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.680 [INFO][5325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.680 [INFO][5325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.690 [WARNING][5325] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.690 [INFO][5325] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.694 [INFO][5325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.702073 containerd[1460]: 2026-01-24 00:26:16.698 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.702471 containerd[1460]: time="2026-01-24T00:26:16.702157738Z" level=info msg="TearDown network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" successfully" Jan 24 00:26:16.702471 containerd[1460]: time="2026-01-24T00:26:16.702195429Z" level=info msg="StopPodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" returns successfully" Jan 24 00:26:16.703310 containerd[1460]: time="2026-01-24T00:26:16.703214057Z" level=info msg="RemovePodSandbox for \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" Jan 24 00:26:16.703310 containerd[1460]: time="2026-01-24T00:26:16.703275531Z" level=info msg="Forcibly stopping sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\"" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.758 [WARNING][5343] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" WorkloadEndpoint="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.759 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.759 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" iface="eth0" netns="" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.759 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.759 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.788 [INFO][5352] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.789 [INFO][5352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.789 [INFO][5352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.799 [WARNING][5352] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.799 [INFO][5352] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" HandleID="k8s-pod-network.9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Workload="localhost-k8s-whisker--5dd9f46c89--fjr7r-eth0" Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.802 [INFO][5352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:26:16.808647 containerd[1460]: 2026-01-24 00:26:16.805 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516" Jan 24 00:26:16.808647 containerd[1460]: time="2026-01-24T00:26:16.808532454Z" level=info msg="TearDown network for sandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" successfully" Jan 24 00:26:16.814834 containerd[1460]: time="2026-01-24T00:26:16.814744182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:26:16.815093 containerd[1460]: time="2026-01-24T00:26:16.814852043Z" level=info msg="RemovePodSandbox \"9f5f1a00b0c1c9c4c3dd3e7945457799648b2263533fbabec2e4831283789516\" returns successfully" Jan 24 00:26:18.641219 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:41920.service - OpenSSH per-connection server daemon (10.0.0.1:41920). Jan 24 00:26:18.707677 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 41920 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:18.710639 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:18.719669 systemd-logind[1443]: New session 10 of user core. Jan 24 00:26:18.727873 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:26:18.925363 sshd[5364]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:18.933395 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:41920.service: Deactivated successfully. Jan 24 00:26:18.937184 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:26:18.938785 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:26:18.941183 systemd-logind[1443]: Removed session 10. Jan 24 00:26:20.812721 kubelet[2571]: E0124 00:26:20.812537 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:21.096735 kubelet[2571]: E0124 00:26:21.096452 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:21.106033 systemd[1]: run-containerd-runc-k8s.io-ed3bc4037a8b895e9db0cb479d660a91ff3ca784e01162cb998eae12c65a745e-runc.eJXoyO.mount: Deactivated successfully. 
Jan 24 00:26:21.340462 kubelet[2571]: E0124 00:26:21.340290 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:26:22.338350 kubelet[2571]: E0124 00:26:22.338190 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:22.340163 kubelet[2571]: E0124 00:26:22.339669 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:26:23.339904 kubelet[2571]: E0124 00:26:23.339743 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:26:23.951413 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:41926.service - OpenSSH per-connection server daemon (10.0.0.1:41926). Jan 24 00:26:24.034870 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:24.037435 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:24.047021 systemd-logind[1443]: New session 11 of user core. Jan 24 00:26:24.061948 systemd[1]: Started session-11.scope - Session 11 of User core. 
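The pod_workers.go ImagePullBackOff errors above are the steady state after repeated failed pulls: each attempt to resolve ghcr.io/flatcar/calico/whisker:v3.30.4 returns not found, and kubelet waits an exponentially growing delay before retrying, reporting ImagePullBackOff in the meantime. A sketch of the schedule, assuming the commonly documented defaults of a 10s initial delay doubling up to a 5m cap:

```go
// Hedged sketch of the retry schedule behind ImagePullBackOff; the
// 10s-doubling-to-5m values are commonly cited defaults, not read from
// this node's configuration.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: pull failed (not found); backing off %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```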
Jan 24 00:26:24.352394 kubelet[2571]: E0124 00:26:24.351197 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:26:24.354797 sshd[5435]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:24.435364 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:41926.service: Deactivated successfully. Jan 24 00:26:24.452266 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:26:24.456152 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:26:24.459022 systemd-logind[1443]: Removed session 11. Jan 24 00:26:25.349021 kubelet[2571]: E0124 00:26:25.346474 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:26:28.339052 kubelet[2571]: E0124 00:26:28.338890 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:26:29.338627 kubelet[2571]: E0124 00:26:29.338425 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:29.374193 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:33662.service - OpenSSH per-connection server daemon (10.0.0.1:33662). Jan 24 00:26:29.454453 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 33662 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:29.457352 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:29.468774 systemd-logind[1443]: New session 12 of user core. 
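When a pod has several failing containers, kubelet folds their start errors into one "Error syncing pod, skipping" record, which is why the whisker and csi-node-driver entries above carry a bracketed list of failed-to-StartContainer items. A sketch of that shape, with errors.Join standing in for kubelet's own aggregation:

```go
// Hedged sketch of per-container error aggregation; errors.Join (Go 1.20+)
// stands in for kubelet's aggregation of start failures.
package main

import (
	"errors"
	"fmt"
)

func main() {
	errCSI := errors.New(`failed to "StartContainer" for "calico-csi" with ImagePullBackOff`)
	errReg := errors.New(`failed to "StartContainer" for "csi-node-driver-registrar" with ImagePullBackOff`)
	fmt.Printf("Error syncing pod, skipping err=%v\n", errors.Join(errCSI, errReg))
}
```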
Jan 24 00:26:29.473341 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:26:29.641472 sshd[5453]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:29.658322 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:33662.service: Deactivated successfully. Jan 24 00:26:29.662426 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:26:29.664153 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:26:29.665705 systemd-logind[1443]: Removed session 12. Jan 24 00:26:34.689284 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:39244.service - OpenSSH per-connection server daemon (10.0.0.1:39244). Jan 24 00:26:34.756541 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 39244 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:34.772247 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:34.779888 systemd-logind[1443]: New session 13 of user core. Jan 24 00:26:34.787929 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:26:35.039507 sshd[5475]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:35.050157 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:39244.service: Deactivated successfully. Jan 24 00:26:35.054833 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:26:35.058295 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:26:35.061075 systemd-logind[1443]: Removed session 13. Jan 24 00:26:35.338894 containerd[1460]: time="2026-01-24T00:26:35.338781831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:26:35.434514 containerd[1460]: time="2026-01-24T00:26:35.434360458Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:35.440498 containerd[1460]: time="2026-01-24T00:26:35.436500488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:26:35.440498 containerd[1460]: time="2026-01-24T00:26:35.436747147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:26:35.440847 kubelet[2571]: E0124 00:26:35.437002 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:26:35.440847 kubelet[2571]: E0124 00:26:35.437067 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:26:35.440847 kubelet[2571]: E0124 00:26:35.437288 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7b482aad45a047b28315ef7e942c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:35.443901 containerd[1460]: time="2026-01-24T00:26:35.443087850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:26:35.518044 containerd[1460]: time="2026-01-24T00:26:35.517870836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:35.521244 containerd[1460]: time="2026-01-24T00:26:35.521167339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:26:35.521764 containerd[1460]: time="2026-01-24T00:26:35.521228439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:26:35.522426 kubelet[2571]: E0124 00:26:35.521874 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:26:35.522426 kubelet[2571]: E0124 00:26:35.521976 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:26:35.522426 kubelet[2571]: E0124 00:26:35.522221 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:35.524323 kubelet[2571]: E0124 00:26:35.524224 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:26:36.341772 containerd[1460]: time="2026-01-24T00:26:36.341717676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:26:36.423071 containerd[1460]: time="2026-01-24T00:26:36.422950471Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:36.424929 containerd[1460]: time="2026-01-24T00:26:36.424824872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:26:36.425038 containerd[1460]: time="2026-01-24T00:26:36.424871048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:36.425392 kubelet[2571]: E0124 00:26:36.425279 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:36.425483 kubelet[2571]: E0124 00:26:36.425392 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:36.425771 kubelet[2571]: E0124 00:26:36.425716 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jz9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:36.427208 kubelet[2571]: E0124 00:26:36.427104 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:26:37.339012 containerd[1460]: time="2026-01-24T00:26:37.338741717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:26:37.459222 containerd[1460]: time="2026-01-24T00:26:37.459079433Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:37.461156 containerd[1460]: time="2026-01-24T00:26:37.460864543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:26:37.461156 containerd[1460]: time="2026-01-24T00:26:37.460923516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:26:37.461411 kubelet[2571]: E0124 00:26:37.461292 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:26:37.461411 kubelet[2571]: E0124 00:26:37.461365 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:26:37.462187 kubelet[2571]: E0124 00:26:37.461699 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:37.464990 containerd[1460]: time="2026-01-24T00:26:37.464461395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:26:37.550511 containerd[1460]: time="2026-01-24T00:26:37.550407922Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:37.552272 containerd[1460]: time="2026-01-24T00:26:37.552176307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:26:37.552379 containerd[1460]: time="2026-01-24T00:26:37.552286101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:26:37.552601 kubelet[2571]: E0124 00:26:37.552491 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:26:37.552702 kubelet[2571]: E0124 00:26:37.552651 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:26:37.552914 kubelet[2571]: E0124 00:26:37.552814 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:37.554357 kubelet[2571]: E0124 00:26:37.554202 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:26:38.339844 containerd[1460]: time="2026-01-24T00:26:38.339779967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:26:38.423377 containerd[1460]: time="2026-01-24T00:26:38.423272033Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:38.425804 containerd[1460]: time="2026-01-24T00:26:38.425643048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:26:38.425978 containerd[1460]: time="2026-01-24T00:26:38.425843391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:26:38.426251 kubelet[2571]: E0124 00:26:38.426158 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:26:38.426251 kubelet[2571]: E0124 00:26:38.426229 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:26:38.428886 kubelet[2571]: E0124 00:26:38.426392 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-524nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:38.428886 kubelet[2571]: E0124 00:26:38.427624 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:26:39.339971 containerd[1460]: time="2026-01-24T00:26:39.339862583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:26:39.422231 containerd[1460]: time="2026-01-24T00:26:39.422061690Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:39.431198 containerd[1460]: time="2026-01-24T00:26:39.431064560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:26:39.431349 containerd[1460]: time="2026-01-24T00:26:39.431274360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:39.431530 kubelet[2571]: E0124 00:26:39.431437 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:26:39.431530 kubelet[2571]: E0124 00:26:39.431521 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:26:39.432034 kubelet[2571]: E0124 00:26:39.431954 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95qkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:39.432191 containerd[1460]: time="2026-01-24T00:26:39.431950656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:26:39.433994 kubelet[2571]: E0124 00:26:39.433903 2571 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:26:39.501876 containerd[1460]: time="2026-01-24T00:26:39.501774357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:26:39.503310 containerd[1460]: time="2026-01-24T00:26:39.503256586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:26:39.503496 containerd[1460]: time="2026-01-24T00:26:39.503355803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:26:39.503943 kubelet[2571]: E0124 00:26:39.503878 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:39.504029 kubelet[2571]: E0124 00:26:39.503944 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:26:39.504299 kubelet[2571]: E0124 00:26:39.504108 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:26:39.506439 kubelet[2571]: E0124 00:26:39.506380 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:26:40.054791 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:39252.service - OpenSSH per-connection server daemon (10.0.0.1:39252). Jan 24 00:26:40.316812 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:40.319176 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:40.326948 systemd-logind[1443]: New session 14 of user core. Jan 24 00:26:40.341391 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:26:40.523856 sshd[5492]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:40.537298 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:39252.service: Deactivated successfully. Jan 24 00:26:40.540447 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:26:40.542696 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:26:40.585177 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). Jan 24 00:26:40.592115 systemd-logind[1443]: Removed session 14. Jan 24 00:26:40.633368 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:40.636378 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:40.644398 systemd-logind[1443]: New session 15 of user core. Jan 24 00:26:40.652016 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:26:40.872743 sshd[5507]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:40.884834 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:39254.service: Deactivated successfully. 
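The containerd side of each failure ("trying next host - response was http.StatusNotFound", followed by a 69-93 byte error body) shows ghcr.io answering the manifest request with a plain 404 for every calico image at tag v3.30.4: a missing tag, not an auth or network problem. The same check can be made off-node against the OCI distribution API; the sketch below is illustrative only and assumes the repository is public and that ghcr.io's usual anonymous token endpoint applies:

    import json
    import sys
    import urllib.error
    import urllib.request

    def manifest_exists(repo: str, tag: str) -> bool:
        # Anonymous pull token (assumed to work for public GHCR repositories).
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={"Authorization": f"Bearer {tok}",
                     "Accept": "application/vnd.oci.image.index.v1+json, "
                               "application/vnd.docker.distribution.manifest.list.v2+json"})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return False  # same NotFound containerd is logging above
            raise

    tag = sys.argv[1] if len(sys.argv) > 1 else "v3.30.4"
    print(manifest_exists("flatcar/calico/whisker", tag))

On the node itself, a crictl pull of the same reference would reproduce the identical NotFound error without waiting for the next kubelet retry.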
Jan 24 00:26:40.888243 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:26:40.891683 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:26:40.903792 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:39266.service - OpenSSH per-connection server daemon (10.0.0.1:39266). Jan 24 00:26:40.909196 systemd-logind[1443]: Removed session 15. Jan 24 00:26:40.961881 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 39266 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:40.964755 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:40.973098 systemd-logind[1443]: New session 16 of user core. Jan 24 00:26:40.983829 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:26:41.139716 sshd[5519]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:41.147727 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:39266.service: Deactivated successfully. Jan 24 00:26:41.151265 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:26:41.166195 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:26:41.168030 systemd-logind[1443]: Removed session 16. Jan 24 00:26:46.159934 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:48226.service - OpenSSH per-connection server daemon (10.0.0.1:48226). Jan 24 00:26:46.235945 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 48226 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:46.242508 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:46.276513 systemd-logind[1443]: New session 17 of user core. Jan 24 00:26:46.292317 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:26:46.348636 kubelet[2571]: E0124 00:26:46.347118 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:26:46.592458 sshd[5533]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:46.637847 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:48226.service: Deactivated successfully. Jan 24 00:26:46.645424 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:26:46.652730 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:26:46.658155 systemd-logind[1443]: Removed session 17. 
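Interleaved with the pull failures, sshd and systemd-logind trace a clean session lifecycle each time: a per-connection sshd@N-LOCAL:22-REMOTE:PORT.service accepts the publickey login for core, logind opens session N and systemd runs it as session-N.scope, and on disconnect both the scope and the per-connection service deactivate. Pairing the open/close entries gives session durations. A sketch, assuming one journal entry per line as journalctl emits them:

    import re
    import sys
    from datetime import datetime

    # Pair "New session N" with "Removed session N" and print session length.
    ts_re = re.compile(r'^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)')
    new_re = re.compile(r'New session (\d+) of user (\S+)\.')
    end_re = re.compile(r'Removed session (\d+)\.')

    opened = {}
    for line in sys.stdin:
        ts = ts_re.match(line)
        if not ts:
            continue
        # The journal timestamp carries no year; 2026 is taken from this log.
        when = datetime.strptime(ts.group(1) + " 2026", "%b %d %H:%M:%S.%f %Y")
        if (m := new_re.search(line)):
            opened[m.group(1)] = (m.group(2), when)
        elif (m := end_re.search(line)) and m.group(1) in opened:
            user, start = opened.pop(m.group(1))
            print(f"session {m.group(1)} ({user}): "
                  f"{(when - start).total_seconds():.1f}s")

Applied here it would show mostly short-lived sessions, e.g. session 12 lasting well under a second between open and close, consistent with scripted or single-command logins.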
Jan 24 00:26:47.339756 kubelet[2571]: E0124 00:26:47.339297 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:26:48.372010 kubelet[2571]: E0124 00:26:48.360947 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:50.374169 kubelet[2571]: E0124 00:26:50.374047 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:26:51.345125 kubelet[2571]: E0124 00:26:51.342767 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:51.350325 kubelet[2571]: E0124 00:26:51.350103 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:26:51.646298 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228). Jan 24 00:26:51.728698 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:51.733854 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:51.754657 systemd-logind[1443]: New session 18 of user core. Jan 24 00:26:51.774932 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:26:52.021869 sshd[5572]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:52.028795 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:48228.service: Deactivated successfully. Jan 24 00:26:52.032048 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:26:52.033883 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:26:52.036048 systemd-logind[1443]: Removed session 18. Jan 24 00:26:53.339997 kubelet[2571]: E0124 00:26:53.339011 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:26:54.344040 kubelet[2571]: E0124 00:26:54.342342 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:26:56.349452 kubelet[2571]: E0124 00:26:56.347938 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:26:57.066339 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440). Jan 24 00:26:57.184626 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:26:57.191743 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:26:57.215075 systemd-logind[1443]: New session 19 of user core. Jan 24 00:26:57.226951 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:26:57.579859 sshd[5587]: pam_unix(sshd:session): session closed for user core Jan 24 00:26:57.601637 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:42440.service: Deactivated successfully. Jan 24 00:26:57.612082 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:26:57.618108 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:26:57.625852 systemd-logind[1443]: Removed session 19. 
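The recurring dns.go:153 "Nameserver limits exceeded" warning is kubelet noting that the node's resolv.conf lists more nameservers than the resolver will honor: only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pod DNS config, and the rest are silently dropped. A quick local check of that condition, assuming the conventional limit of three that kubelet validates against:

    MAXNS = 3  # nameserver limit kubelet's warning refers to

    def check_resolv(path: str = "/etc/resolv.conf") -> None:
        # List nameserver entries and flag anything past the limit.
        with open(path) as f:
            servers = [parts[1] for line in f
                       if line.strip().startswith("nameserver")
                       and len(parts := line.split()) > 1]
        applied, dropped = servers[:MAXNS], servers[MAXNS:]
        print("applied:", " ".join(applied))
        if dropped:
            print("omitted (over the limit):", " ".join(dropped))

    check_resolv()

The fix is on the node, not in Kubernetes: trim resolv.conf (or the kubelet --resolv-conf target) to at most three entries and the warning stops.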
Jan 24 00:27:01.345170 kubelet[2571]: E0124 00:27:01.344172 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:27:01.348286 kubelet[2571]: E0124 00:27:01.347982 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:27:02.617935 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450). Jan 24 00:27:02.723952 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:02.725070 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:02.739711 systemd-logind[1443]: New session 20 of user core. Jan 24 00:27:02.752925 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:27:03.109788 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:03.140872 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:42450.service: Deactivated successfully. Jan 24 00:27:03.157207 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:27:03.163878 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:27:03.166142 systemd-logind[1443]: Removed session 20. 
Jan 24 00:27:04.374466 kubelet[2571]: E0124 00:27:04.374401 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:27:05.348145 kubelet[2571]: E0124 00:27:05.339922 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:27:06.354545 kubelet[2571]: E0124 00:27:06.349702 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:27:06.368970 kubelet[2571]: E0124 00:27:06.367735 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:27:07.342818 kubelet[2571]: E0124 00:27:07.341129 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:08.241015 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042). 
Jan 24 00:27:08.378458 kubelet[2571]: E0124 00:27:08.377115 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:08.540408 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:08.564932 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:08.602720 systemd-logind[1443]: New session 21 of user core. Jan 24 00:27:08.640637 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:27:09.276939 sshd[5615]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:09.291407 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:51042.service: Deactivated successfully. Jan 24 00:27:09.304790 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:27:09.324643 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:27:09.328289 systemd-logind[1443]: Removed session 21. Jan 24 00:27:14.127665 kubelet[2571]: E0124 00:27:14.126280 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:27:14.483752 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:51050.service - OpenSSH per-connection server daemon (10.0.0.1:51050). Jan 24 00:27:14.672554 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 51050 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:14.680622 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:14.702730 systemd-logind[1443]: New session 22 of user core. Jan 24 00:27:14.713678 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:27:15.079016 sshd[5638]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:15.098092 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:51050.service: Deactivated successfully. Jan 24 00:27:15.106947 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:27:15.117273 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:27:15.124102 systemd-logind[1443]: Removed session 22. 
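Note the cadence by this point in the log: actual registry contacts (containerd "PullImage ... failed") grow sparser while kubelet's "Back-off pulling image" entries keep appearing at every pod sync. That is the per-image pull backoff at work: syncs during the backoff window fail fast without touching ghcr.io. Assuming the stock kubelet defaults of a 10 s base doubling to a 300 s cap (configured clusters may differ), the retry schedule looks like this:

    # Exponential image-pull backoff: base doubling up to a cap.
    # 10 s / 300 s are the assumed stock kubelet defaults.
    def backoff_schedule(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
        delay = base
        for i in range(1, attempts + 1):
            yield i, min(delay, cap)
            delay *= 2

    for attempt, wait in backoff_schedule():
        print(f"attempt {attempt}: wait {wait:.0f}s before the next pull")

That yields waits of 10, 20, 40, 80, 160 s, then a steady 300 s; the ~41 s gap between the whisker pull attempts at 00:26:35 and 00:27:16 below is consistent with the 40 s step of such a schedule.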
Jan 24 00:27:16.350985 containerd[1460]: time="2026-01-24T00:27:16.350315758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:27:16.352212 kubelet[2571]: E0124 00:27:16.350821 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:27:16.452683 containerd[1460]: time="2026-01-24T00:27:16.449320638Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:16.463480 containerd[1460]: time="2026-01-24T00:27:16.463161777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:27:16.466073 containerd[1460]: time="2026-01-24T00:27:16.465902766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:27:16.470214 kubelet[2571]: E0124 00:27:16.470055 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:27:16.470214 kubelet[2571]: E0124 00:27:16.470179 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:27:16.473658 kubelet[2571]: E0124 00:27:16.470653 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7b482aad45a047b28315ef7e942c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:16.479283 containerd[1460]: time="2026-01-24T00:27:16.477817558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:27:16.580125 containerd[1460]: time="2026-01-24T00:27:16.579757013Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:16.593462 containerd[1460]: time="2026-01-24T00:27:16.590075607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:27:16.593462 containerd[1460]: time="2026-01-24T00:27:16.590240324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:27:16.593784 kubelet[2571]: E0124 00:27:16.590493 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:27:16.593784 kubelet[2571]: E0124 00:27:16.590629 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:27:16.593784 kubelet[2571]: E0124 00:27:16.590785 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9x7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69947d5585-lnx9f_calico-system(824df689-3a42-4e89-bcb7-c81811fd2fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:16.593784 kubelet[2571]: E0124 00:27:16.592515 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:27:17.347867 kubelet[2571]: E0124 00:27:17.341162 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:27:18.345671 kubelet[2571]: E0124 00:27:18.344166 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:27:19.698227 containerd[1460]: time="2026-01-24T00:27:19.692052064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:27:19.853339 containerd[1460]: time="2026-01-24T00:27:19.852780193Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:19.870226 containerd[1460]: time="2026-01-24T00:27:19.870139083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:27:19.870930 containerd[1460]: time="2026-01-24T00:27:19.870541824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:27:19.872532 kubelet[2571]: E0124 00:27:19.871270 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:27:19.872532 kubelet[2571]: E0124 00:27:19.871408 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:27:19.872532 kubelet[2571]: E0124 00:27:19.871733 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4kf4b_calico-apiserver(c9124158-0f90-4bb6-8fd8-7f63bd272b78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:19.876514 kubelet[2571]: E0124 00:27:19.874274 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:27:20.146988 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). Jan 24 00:27:20.283526 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:20.305897 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:20.325249 systemd-logind[1443]: New session 23 of user core. Jan 24 00:27:20.349973 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:27:20.723737 sshd[5661]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:20.731232 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:58526.service: Deactivated successfully. 
Jan 24 00:27:20.737987 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:27:20.746455 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:27:20.750303 systemd-logind[1443]: Removed session 23. Jan 24 00:27:25.752232 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:43416.service - OpenSSH per-connection server daemon (10.0.0.1:43416). Jan 24 00:27:25.851225 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 43416 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:25.855159 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:25.866345 systemd-logind[1443]: New session 24 of user core. Jan 24 00:27:25.880072 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:27:26.067178 sshd[5701]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:26.071996 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:43416.service: Deactivated successfully. Jan 24 00:27:26.075097 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:27:26.078245 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:27:26.080160 systemd-logind[1443]: Removed session 24. Jan 24 00:27:26.354121 containerd[1460]: time="2026-01-24T00:27:26.342185666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:27:26.452069 containerd[1460]: time="2026-01-24T00:27:26.451616895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:26.454121 containerd[1460]: time="2026-01-24T00:27:26.454007227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:27:26.454261 containerd[1460]: time="2026-01-24T00:27:26.454201970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:27:26.456511 kubelet[2571]: E0124 00:27:26.456215 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:27:26.456511 kubelet[2571]: E0124 00:27:26.456293 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:27:26.457353 kubelet[2571]: E0124 00:27:26.456691 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jz9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff5668969-4dlrd_calico-apiserver(40ee8f0f-9c75-4f11-bb2e-9eb000639316): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:26.459219 kubelet[2571]: E0124 00:27:26.458693 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:27:29.355193 kubelet[2571]: E0124 00:27:29.355112 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:27:31.126914 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:43428.service - OpenSSH per-connection server daemon (10.0.0.1:43428). Jan 24 00:27:31.198234 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 43428 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:31.199238 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:31.210777 systemd-logind[1443]: New session 25 of user core. Jan 24 00:27:31.217003 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:27:31.343979 containerd[1460]: time="2026-01-24T00:27:31.343517967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:27:31.432253 containerd[1460]: time="2026-01-24T00:27:31.432080493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:31.438649 containerd[1460]: time="2026-01-24T00:27:31.438134820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:27:31.439268 containerd[1460]: time="2026-01-24T00:27:31.439176932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:27:31.440895 kubelet[2571]: E0124 00:27:31.440664 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:27:31.440895 kubelet[2571]: E0124 00:27:31.440766 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:27:31.446194 kubelet[2571]: E0124 00:27:31.441225 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-524nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f664d4f9c-5l5qb_calico-system(2b7e3139-1ac0-464d-91ba-3ef9871bf348): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:31.446194 kubelet[2571]: E0124 00:27:31.443385 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:27:31.450836 containerd[1460]: time="2026-01-24T00:27:31.441768374Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:27:31.452732 sshd[5736]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:31.458796 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:43428.service: Deactivated successfully. Jan 24 00:27:31.467646 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:27:31.474104 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:27:31.483738 systemd-logind[1443]: Removed session 25. Jan 24 00:27:31.512646 containerd[1460]: time="2026-01-24T00:27:31.511784510Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:31.522309 containerd[1460]: time="2026-01-24T00:27:31.522057463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:27:31.522309 containerd[1460]: time="2026-01-24T00:27:31.522202294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:27:31.523083 kubelet[2571]: E0124 00:27:31.522816 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:27:31.523083 kubelet[2571]: E0124 00:27:31.522943 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:27:31.526699 containerd[1460]: time="2026-01-24T00:27:31.523412379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:27:31.526867 kubelet[2571]: E0124 00:27:31.526370 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:31.654473 containerd[1460]: time="2026-01-24T00:27:31.654276450Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:31.658932 containerd[1460]: time="2026-01-24T00:27:31.658485898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:27:31.658932 containerd[1460]: time="2026-01-24T00:27:31.658650303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:27:31.659210 kubelet[2571]: E0124 00:27:31.659095 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:27:31.659210 kubelet[2571]: E0124 00:27:31.659190 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:27:31.665353 kubelet[2571]: E0124 00:27:31.660867 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95qkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfbl5_calico-system(3c957499-b83a-4ee9-8faf-8cc8bcb63fe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:31.665353 kubelet[2571]: E0124 00:27:31.662095 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:27:31.666931 containerd[1460]: time="2026-01-24T00:27:31.664958737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:27:31.749906 containerd[1460]: time="2026-01-24T00:27:31.749406748Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:27:31.772637 containerd[1460]: time="2026-01-24T00:27:31.765783553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:27:31.772637 containerd[1460]: time="2026-01-24T00:27:31.765925839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:27:31.772905 kubelet[2571]: E0124 00:27:31.766519 2571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:27:31.772905 kubelet[2571]: E0124 00:27:31.766638 2571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:27:31.772905 kubelet[2571]: E0124 00:27:31.766797 2571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-grfd7_calico-system(677a3c6a-a428-4746-be4d-2080a36b4930): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:27:31.773909 kubelet[2571]: E0124 00:27:31.773440 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:27:35.349807 kubelet[2571]: E0124 00:27:35.349735 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:27:36.513946 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844). Jan 24 00:27:36.624373 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:36.628946 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:36.649250 systemd-logind[1443]: New session 26 of user core. Jan 24 00:27:36.664532 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 00:27:37.089369 sshd[5750]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:37.111333 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:52844.service: Deactivated successfully. Jan 24 00:27:37.119120 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:27:37.123170 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Jan 24 00:27:37.129214 systemd-logind[1443]: Removed session 26. Jan 24 00:27:39.397925 kubelet[2571]: E0124 00:27:39.396777 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316" Jan 24 00:27:40.973626 kubelet[2571]: E0124 00:27:40.970807 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:42.230651 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:52860.service - OpenSSH per-connection server daemon (10.0.0.1:52860). Jan 24 00:27:42.499210 kubelet[2571]: E0124 00:27:42.497768 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8" Jan 24 00:27:42.549756 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 52860 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:42.552972 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:42.581103 systemd-logind[1443]: New session 27 of user core. 
Jan 24 00:27:42.612652 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:27:42.972783 sshd[5765]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:42.982350 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:52860.service: Deactivated successfully. Jan 24 00:27:42.988895 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 00:27:42.992301 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:27:43.002882 systemd-logind[1443]: Removed session 27. Jan 24 00:27:44.466250 kubelet[2571]: E0124 00:27:44.465225 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348" Jan 24 00:27:44.483953 kubelet[2571]: E0124 00:27:44.469377 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930" Jan 24 00:27:45.363036 kubelet[2571]: E0124 00:27:45.362225 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:45.366918 kubelet[2571]: E0124 00:27:45.366758 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3" Jan 24 00:27:48.028252 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:37508.service - OpenSSH per-connection server daemon (10.0.0.1:37508). 
Jan 24 00:27:48.333713 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 37508 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:48.367834 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:48.387780 systemd-logind[1443]: New session 28 of user core. Jan 24 00:27:48.404025 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 00:27:51.010828 kubelet[2571]: E0124 00:27:51.010678 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:51.028686 kubelet[2571]: E0124 00:27:51.028145 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:27:51.040400 kubelet[2571]: E0124 00:27:51.040155 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78" Jan 24 00:27:51.103206 sshd[5779]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:51.127891 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:37508.service: Deactivated successfully. Jan 24 00:27:51.139722 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 00:27:51.140336 systemd[1]: session-28.scope: Consumed 2.501s CPU time. Jan 24 00:27:51.143023 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit. Jan 24 00:27:51.171363 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:37520.service - OpenSSH per-connection server daemon (10.0.0.1:37520). Jan 24 00:27:51.174077 systemd-logind[1443]: Removed session 28. Jan 24 00:27:51.274939 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 37520 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:51.275947 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:51.291878 systemd-logind[1443]: New session 29 of user core. Jan 24 00:27:51.307318 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 24 00:27:52.282448 sshd[5812]: pam_unix(sshd:session): session closed for user core Jan 24 00:27:52.300704 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:37520.service: Deactivated successfully. Jan 24 00:27:52.306469 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 00:27:52.312884 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit. Jan 24 00:27:52.329293 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:37530.service - OpenSSH per-connection server daemon (10.0.0.1:37530). Jan 24 00:27:52.331144 systemd-logind[1443]: Removed session 29. Jan 24 00:27:52.400959 sshd[5832]: Accepted publickey for core from 10.0.0.1 port 37530 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:27:52.404967 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:27:52.414169 systemd-logind[1443]: New session 30 of user core. 
Jan 24 00:27:52.422939 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 24 00:27:53.342072 kubelet[2571]: E0124 00:27:53.341879 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316"
Jan 24 00:27:53.433974 sshd[5832]: pam_unix(sshd:session): session closed for user core
Jan 24 00:27:53.453417 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:37530.service: Deactivated successfully.
Jan 24 00:27:53.458842 systemd[1]: session-30.scope: Deactivated successfully.
Jan 24 00:27:53.463384 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit.
Jan 24 00:27:53.477665 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:37532.service - OpenSSH per-connection server daemon (10.0.0.1:37532).
Jan 24 00:27:53.481991 systemd-logind[1443]: Removed session 30.
Jan 24 00:27:53.548983 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 37532 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:27:53.550761 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:27:53.560000 systemd-logind[1443]: New session 31 of user core.
Jan 24 00:27:53.608069 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 24 00:27:53.972938 sshd[5855]: pam_unix(sshd:session): session closed for user core
Jan 24 00:27:53.985908 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:37532.service: Deactivated successfully.
Jan 24 00:27:53.993125 systemd[1]: session-31.scope: Deactivated successfully.
Jan 24 00:27:54.000461 systemd-logind[1443]: Session 31 logged out. Waiting for processes to exit.
Jan 24 00:27:54.014442 systemd[1]: Started sshd@31-10.0.0.16:22-10.0.0.1:37536.service - OpenSSH per-connection server daemon (10.0.0.1:37536).
Jan 24 00:27:54.017213 systemd-logind[1443]: Removed session 31.
Jan 24 00:27:54.070172 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 37536 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:27:54.073005 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:27:54.081179 systemd-logind[1443]: New session 32 of user core.
Jan 24 00:27:54.091934 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 24 00:27:54.284404 sshd[5870]: pam_unix(sshd:session): session closed for user core
Jan 24 00:27:54.291711 systemd[1]: sshd@31-10.0.0.16:22-10.0.0.1:37536.service: Deactivated successfully.
Jan 24 00:27:54.294524 systemd[1]: session-32.scope: Deactivated successfully.
Jan 24 00:27:54.298387 systemd-logind[1443]: Session 32 logged out. Waiting for processes to exit.
Jan 24 00:27:54.301013 systemd-logind[1443]: Removed session 32.
Jan 24 00:27:57.339262 kubelet[2571]: E0124 00:27:57.339087 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f664d4f9c-5l5qb" podUID="2b7e3139-1ac0-464d-91ba-3ef9871bf348"
Jan 24 00:27:57.341001 kubelet[2571]: E0124 00:27:57.340939 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8"
Jan 24 00:27:58.348642 kubelet[2571]: E0124 00:27:58.346980 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930"
Jan 24 00:27:59.315254 systemd[1]: Started sshd@32-10.0.0.16:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072).
Jan 24 00:27:59.339946 kubelet[2571]: E0124 00:27:59.339342 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3"
Jan 24 00:27:59.358526 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:27:59.361992 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:27:59.376725 systemd-logind[1443]: New session 33 of user core.
Jan 24 00:27:59.379875 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 24 00:27:59.587351 sshd[5886]: pam_unix(sshd:session): session closed for user core
Jan 24 00:27:59.592290 systemd[1]: sshd@32-10.0.0.16:22-10.0.0.1:53072.service: Deactivated successfully.
Jan 24 00:27:59.597352 systemd[1]: session-33.scope: Deactivated successfully.
Jan 24 00:27:59.603657 systemd-logind[1443]: Session 33 logged out. Waiting for processes to exit.
Jan 24 00:27:59.606709 systemd-logind[1443]: Removed session 33.
Jan 24 00:28:04.611006 systemd[1]: Started sshd@33-10.0.0.16:22-10.0.0.1:50754.service - OpenSSH per-connection server daemon (10.0.0.1:50754).
Jan 24 00:28:04.675308 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 50754 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:28:04.677522 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:28:04.684812 systemd-logind[1443]: New session 34 of user core.
Jan 24 00:28:04.696075 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 24 00:28:04.865548 sshd[5902]: pam_unix(sshd:session): session closed for user core
Jan 24 00:28:04.871789 systemd[1]: sshd@33-10.0.0.16:22-10.0.0.1:50754.service: Deactivated successfully.
Jan 24 00:28:04.875735 systemd[1]: session-34.scope: Deactivated successfully.
Jan 24 00:28:04.878844 systemd-logind[1443]: Session 34 logged out. Waiting for processes to exit.
Jan 24 00:28:04.880737 systemd-logind[1443]: Removed session 34.
Jan 24 00:28:05.338813 kubelet[2571]: E0124 00:28:05.338273 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4kf4b" podUID="c9124158-0f90-4bb6-8fd8-7f63bd272b78"
Jan 24 00:28:08.345009 kubelet[2571]: E0124 00:28:08.344445 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff5668969-4dlrd" podUID="40ee8f0f-9c75-4f11-bb2e-9eb000639316"
Jan 24 00:28:09.341166 kubelet[2571]: E0124 00:28:09.341001 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69947d5585-lnx9f" podUID="824df689-3a42-4e89-bcb7-c81811fd2fd8"
Jan 24 00:28:09.885802 systemd[1]: Started sshd@34-10.0.0.16:22-10.0.0.1:50762.service - OpenSSH per-connection server daemon (10.0.0.1:50762).
Jan 24 00:28:09.942898 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 50762 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:28:09.945531 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:28:09.952996 systemd-logind[1443]: New session 35 of user core.
Jan 24 00:28:09.963928 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 24 00:28:10.136087 sshd[5917]: pam_unix(sshd:session): session closed for user core
Jan 24 00:28:10.143522 systemd-logind[1443]: Session 35 logged out. Waiting for processes to exit.
Jan 24 00:28:10.144161 systemd[1]: sshd@34-10.0.0.16:22-10.0.0.1:50762.service: Deactivated successfully.
Jan 24 00:28:10.147769 systemd[1]: session-35.scope: Deactivated successfully.
Jan 24 00:28:10.150284 systemd-logind[1443]: Removed session 35.
Jan 24 00:28:10.339536 kubelet[2571]: E0124 00:28:10.339128 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfbl5" podUID="3c957499-b83a-4ee9-8faf-8cc8bcb63fe3"
Jan 24 00:28:11.337648 kubelet[2571]: E0124 00:28:11.337484 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:28:11.339515 kubelet[2571]: E0124 00:28:11.339416 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-grfd7" podUID="677a3c6a-a428-4746-be4d-2080a36b4930"