Jan 24 00:28:50.141100 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:28:50.141125 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:28:50.141137 kernel: BIOS-provided physical RAM map:
Jan 24 00:28:50.141143 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:28:50.141148 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 24 00:28:50.141154 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 24 00:28:50.141160 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 24 00:28:50.141166 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 24 00:28:50.141171 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 24 00:28:50.141177 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 24 00:28:50.141186 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 24 00:28:50.141192 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 24 00:28:50.141198 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 24 00:28:50.141203 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 24 00:28:50.141210 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 24 00:28:50.141217 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 24 00:28:50.141225 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 24 00:28:50.141231 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 24 00:28:50.141237 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 24 00:28:50.141243 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:28:50.141249 kernel: NX (Execute Disable) protection: active
Jan 24 00:28:50.141255 kernel: APIC: Static calls initialized
Jan 24 00:28:50.141261 kernel: efi: EFI v2.7 by EDK II
Jan 24 00:28:50.141267 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 24 00:28:50.141273 kernel: SMBIOS 2.8 present.
Jan 24 00:28:50.141278 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 24 00:28:50.141284 kernel: Hypervisor detected: KVM
Jan 24 00:28:50.141292 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:28:50.141298 kernel: kvm-clock: using sched offset of 5273386291 cycles
Jan 24 00:28:50.141305 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:28:50.141311 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:28:50.141318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:28:50.141324 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:28:50.141331 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 24 00:28:50.141337 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:28:50.141344 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:28:50.141353 kernel: Using GB pages for direct mapping
Jan 24 00:28:50.141360 kernel: Secure boot disabled
Jan 24 00:28:50.141366 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:28:50.141372 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 24 00:28:50.141402 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 24 00:28:50.141409 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141416 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141425 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 24 00:28:50.141432 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141439 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141445 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141468 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:28:50.141474 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 24 00:28:50.141481 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 24 00:28:50.141491 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 24 00:28:50.141498 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 24 00:28:50.141504 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 24 00:28:50.141510 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 24 00:28:50.141517 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 24 00:28:50.141524 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 24 00:28:50.141530 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 24 00:28:50.141537 kernel: No NUMA configuration found
Jan 24 00:28:50.141543 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 24 00:28:50.141553 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 24 00:28:50.141559 kernel: Zone ranges:
Jan 24 00:28:50.141566 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:28:50.141573 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 24 00:28:50.141579 kernel: Normal empty
Jan 24 00:28:50.141625 kernel: Movable zone start for each node
Jan 24 00:28:50.141631 kernel: Early memory node ranges
Jan 24 00:28:50.141638 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:28:50.141644 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 24 00:28:50.141650 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 24 00:28:50.141661 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 24 00:28:50.141667 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 24 00:28:50.141673 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 24 00:28:50.141680 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 24 00:28:50.141687 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:28:50.141693 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:28:50.141700 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 24 00:28:50.141706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:28:50.141712 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 24 00:28:50.141722 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 24 00:28:50.141728 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 24 00:28:50.141735 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:28:50.141741 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:28:50.141748 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:28:50.141754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:28:50.141760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:28:50.141767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:28:50.141773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:28:50.141783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:28:50.141789 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:28:50.141796 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:28:50.141802 kernel: TSC deadline timer available
Jan 24 00:28:50.141809 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:28:50.141815 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:28:50.141822 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:28:50.141828 kernel: kvm-guest: setup PV sched yield
Jan 24 00:28:50.141835 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 24 00:28:50.141841 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:28:50.141851 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:28:50.141858 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:28:50.141864 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:28:50.141871 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:28:50.141877 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:28:50.141884 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:28:50.141890 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:28:50.141898 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:28:50.141907 kernel: random: crng init done
Jan 24 00:28:50.141914 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:28:50.141920 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:28:50.141927 kernel: Fallback order for Node 0: 0
Jan 24 00:28:50.141962 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 24 00:28:50.141968 kernel: Policy zone: DMA32
Jan 24 00:28:50.141975 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:28:50.141982 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 24 00:28:50.141989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:28:50.141999 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:28:50.142006 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:28:50.142012 kernel: Dynamic Preempt: voluntary
Jan 24 00:28:50.142018 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:28:50.142035 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:28:50.142045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:28:50.142052 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:28:50.142059 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:28:50.142065 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:28:50.142072 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:28:50.142079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:28:50.142086 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:28:50.142095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:28:50.142102 kernel: Console: colour dummy device 80x25
Jan 24 00:28:50.142108 kernel: printk: console [ttyS0] enabled
Jan 24 00:28:50.142115 kernel: ACPI: Core revision 20230628
Jan 24 00:28:50.142122 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:28:50.142132 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:28:50.142139 kernel: x2apic enabled
Jan 24 00:28:50.142145 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:28:50.142152 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:28:50.142159 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:28:50.142166 kernel: kvm-guest: setup PV IPIs
Jan 24 00:28:50.142172 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:28:50.142179 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:28:50.142185 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:28:50.142195 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:28:50.142201 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:28:50.142208 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:28:50.142214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:28:50.142221 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:28:50.142228 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:28:50.142234 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:28:50.142241 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:28:50.142251 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:28:50.142257 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:28:50.142264 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:28:50.142271 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:28:50.142277 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:28:50.142284 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:28:50.142291 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:28:50.142298 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:28:50.142304 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:28:50.142314 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:28:50.142321 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:28:50.142327 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:28:50.142334 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:28:50.142340 kernel: landlock: Up and running.
Jan 24 00:28:50.142347 kernel: SELinux: Initializing.
Jan 24 00:28:50.142353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:28:50.142360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:28:50.142367 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:28:50.142376 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:28:50.142383 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:28:50.142389 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:28:50.142396 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:28:50.142403 kernel: signal: max sigframe size: 1776
Jan 24 00:28:50.142409 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:28:50.142416 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:28:50.142422 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:28:50.142429 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:28:50.142438 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:28:50.142444 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:28:50.142451 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:28:50.142457 kernel: smpboot: Max logical packages: 1
Jan 24 00:28:50.142464 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:28:50.142471 kernel: devtmpfs: initialized
Jan 24 00:28:50.142477 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:28:50.142484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 24 00:28:50.142490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 24 00:28:50.142500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 24 00:28:50.142507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 24 00:28:50.142514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 24 00:28:50.142520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:28:50.142527 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:28:50.142534 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:28:50.142540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:28:50.142547 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:28:50.142554 kernel: audit: type=2000 audit(1769214528.249:1): state=initialized audit_enabled=0 res=1
Jan 24 00:28:50.142563 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:28:50.142569 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:28:50.142576 kernel: cpuidle: using governor menu
Jan 24 00:28:50.142631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:28:50.142639 kernel: dca service started, version 1.12.1
Jan 24 00:28:50.142645 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:28:50.142652 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:28:50.142659 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:28:50.142669 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:28:50.142676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:28:50.142682 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:28:50.142689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:28:50.142695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:28:50.142702 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:28:50.142709 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:28:50.142715 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:28:50.142722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:28:50.142731 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:28:50.142738 kernel: ACPI: Interpreter enabled
Jan 24 00:28:50.142745 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:28:50.142751 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:28:50.142758 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:28:50.142765 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:28:50.142771 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:28:50.142778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:28:50.143011 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:28:50.143207 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:28:50.143431 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:28:50.143481 kernel: PCI host bridge to bus 0000:00
Jan 24 00:28:50.143673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:28:50.143834 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:28:50.144042 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:28:50.144167 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:28:50.144277 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:28:50.144387 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 24 00:28:50.144496 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:28:50.144754 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:28:50.145035 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:28:50.145165 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 24 00:28:50.145293 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 24 00:28:50.145413 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:28:50.145531 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 24 00:28:50.145713 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:28:50.145848 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:28:50.146010 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 24 00:28:50.146135 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 24 00:28:50.146261 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 24 00:28:50.146398 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:28:50.146519 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 24 00:28:50.146712 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 24 00:28:50.146835 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 24 00:28:50.147004 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:28:50.147134 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 24 00:28:50.147257 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 24 00:28:50.147376 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 24 00:28:50.147495 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 24 00:28:50.147686 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:28:50.147811 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:28:50.147974 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:28:50.148106 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 24 00:28:50.148225 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 24 00:28:50.148352 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:28:50.148471 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 24 00:28:50.148481 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:28:50.148488 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:28:50.148495 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:28:50.148502 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:28:50.148512 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:28:50.148519 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:28:50.148525 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:28:50.148532 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:28:50.148539 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:28:50.148545 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:28:50.148552 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:28:50.148558 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:28:50.148565 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:28:50.148574 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:28:50.148704 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:28:50.148715 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:28:50.148722 kernel: iommu: Default domain type: Translated
Jan 24 00:28:50.148728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:28:50.148735 kernel: efivars: Registered efivars operations
Jan 24 00:28:50.148742 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:28:50.148748 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:28:50.148755 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 24 00:28:50.148766 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 24 00:28:50.148772 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 24 00:28:50.148779 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 24 00:28:50.148909 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:28:50.149072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:28:50.149192 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:28:50.149202 kernel: vgaarb: loaded
Jan 24 00:28:50.149209 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:28:50.149216 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:28:50.149226 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:28:50.149233 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:28:50.149240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:28:50.149247 kernel: pnp: PnP ACPI init
Jan 24 00:28:50.149377 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:28:50.149388 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:28:50.149395 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:28:50.149402 kernel: NET: Registered PF_INET protocol family
Jan 24 00:28:50.149412 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:28:50.149419 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:28:50.149426 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:28:50.149434 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:28:50.149440 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:28:50.149447 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:28:50.149454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:28:50.149460 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:28:50.149470 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:28:50.149476 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:28:50.149659 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 24 00:28:50.149786 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 24 00:28:50.149898 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:28:50.150047 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:28:50.150159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:28:50.150268 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:28:50.150381 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:28:50.150490 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 24 00:28:50.150499 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:28:50.150506 kernel: Initialise system trusted keyrings
Jan 24 00:28:50.150513 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:28:50.150520 kernel: Key type asymmetric registered
Jan 24 00:28:50.150526 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:28:50.150533 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:28:50.150540 kernel: io scheduler mq-deadline registered
Jan 24 00:28:50.150550 kernel: io scheduler kyber registered
Jan 24 00:28:50.150557 kernel: io scheduler bfq registered
Jan 24 00:28:50.150564 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:28:50.150571 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:28:50.150577 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:28:50.150630 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:28:50.150637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:28:50.150644 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:28:50.150650 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:28:50.150660 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:28:50.150667 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:28:50.150802 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:28:50.150813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:28:50.150925 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:28:50.151076 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:28:49 UTC (1769214529)
Jan 24 00:28:50.151189 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:28:50.151198 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:28:50.151209 kernel: efifb: probing for efifb
Jan 24 00:28:50.151216 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 24 00:28:50.151223 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 24 00:28:50.151229 kernel: efifb: scrolling: redraw
Jan 24 00:28:50.151236 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 24 00:28:50.151242 kernel: Console: switching to colour frame buffer device 100x37
Jan 24 00:28:50.151249 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:28:50.151256 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:28:50.151263 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:28:50.151272 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:28:50.151278 kernel: Segment Routing with IPv6
Jan 24 00:28:50.151285 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:28:50.151292 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:28:50.151298 kernel: Key type dns_resolver registered
Jan 24 00:28:50.151305 kernel: IPI shorthand broadcast: enabled
Jan 24 00:28:50.151330 kernel: sched_clock: Marking stable (915024988, 337446764)->(1541532357, -289060605)
Jan 24 00:28:50.151339 kernel: registered taskstats version 1
Jan 24 00:28:50.151346 kernel: Loading compiled-in X.509 certificates
Jan 24 00:28:50.151356 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:28:50.151363 kernel: Key type .fscrypt registered
Jan 24 00:28:50.151370 kernel: Key type fscrypt-provisioning registered
Jan 24 00:28:50.151377 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:28:50.151384 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:28:50.151391 kernel: ima: No architecture policies found
Jan 24 00:28:50.151398 kernel: clk: Disabling unused clocks
Jan 24 00:28:50.151405 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:28:50.151412 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:28:50.151421 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:28:50.151428 kernel: Run /init as init process
Jan 24 00:28:50.151435 kernel: with arguments:
Jan 24 00:28:50.151442 kernel: /init
Jan 24 00:28:50.151449 kernel: with environment:
Jan 24 00:28:50.151455 kernel: HOME=/
Jan 24 00:28:50.151462 kernel: TERM=linux
Jan 24 00:28:50.151472 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:28:50.151483 systemd[1]: Detected virtualization kvm.
Jan 24 00:28:50.151491 systemd[1]: Detected architecture x86-64.
Jan 24 00:28:50.151498 systemd[1]: Running in initrd.
Jan 24 00:28:50.151505 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:28:50.151512 systemd[1]: Hostname set to .
Jan 24 00:28:50.151519 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:28:50.151529 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:28:50.151539 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:28:50.151570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:28:50.151578 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:28:50.151625 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:28:50.151633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:28:50.151644 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:28:50.151656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:28:50.151663 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:28:50.151671 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:28:50.151678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:28:50.151686 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:28:50.151693 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:28:50.151703 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:28:50.151710 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:28:50.151717 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:28:50.151725 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:28:50.151732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:28:50.151740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:28:50.151748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:28:50.151755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:28:50.151762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:28:50.151772 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:28:50.151779 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:28:50.151787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:28:50.151794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:28:50.151801 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:28:50.151809 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:28:50.151816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:28:50.151824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:28:50.151831 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:28:50.151841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:28:50.151872 systemd-journald[194]: Collecting audit messages is disabled.
Jan 24 00:28:50.151892 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:28:50.151903 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:28:50.151911 systemd-journald[194]: Journal started
Jan 24 00:28:50.151926 systemd-journald[194]: Runtime Journal (/run/log/journal/c424f75fba584d54b96a0e02f74b6b8d) is 6.0M, max 48.3M, 42.2M free.
Jan 24 00:28:50.141511 systemd-modules-load[195]: Inserted module 'overlay'
Jan 24 00:28:50.163843 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:28:50.164456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:28:50.168410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:28:50.186640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:28:50.189960 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 24 00:28:50.192731 kernel: Bridge firewalling registered
Jan 24 00:28:50.195049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:28:50.199519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:28:50.203135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:28:50.203890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:28:50.209208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:28:50.227809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:28:50.235451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:28:50.243412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:28:50.243885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:28:50.254897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:28:50.261199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:28:50.271571 dracut-cmdline[229]: dracut-dracut-053
Jan 24 00:28:50.275294 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:28:50.301182 systemd-resolved[233]: Positive Trust Anchors:
Jan 24 00:28:50.301218 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:28:50.301244 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:28:50.304030 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jan 24 00:28:50.305344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:28:50.309309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:28:50.377675 kernel: SCSI subsystem initialized
Jan 24 00:28:50.387697 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:28:50.400649 kernel: iscsi: registered transport (tcp)
Jan 24 00:28:50.423806 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:28:50.423905 kernel: QLogic iSCSI HBA Driver
Jan 24 00:28:50.487104 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:28:50.500914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:28:50.536365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:28:50.536442 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:28:50.539210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:28:50.584717 kernel: raid6: avx2x4 gen() 32831 MB/s
Jan 24 00:28:50.602684 kernel: raid6: avx2x2 gen() 28075 MB/s
Jan 24 00:28:50.621796 kernel: raid6: avx2x1 gen() 25375 MB/s
Jan 24 00:28:50.621874 kernel: raid6: using algorithm avx2x4 gen() 32831 MB/s
Jan 24 00:28:50.641808 kernel: raid6: .... xor() 4252 MB/s, rmw enabled
Jan 24 00:28:50.641925 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:28:50.663685 kernel: xor: automatically using best checksumming function avx
Jan 24 00:28:50.953698 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:28:50.970688 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:28:50.982937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:28:51.005338 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 24 00:28:51.013739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:28:51.034900 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:28:51.057378 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jan 24 00:28:51.103991 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:28:51.130985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:28:51.235149 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:28:51.251793 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:28:51.271813 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:28:51.281661 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:28:51.292083 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 24 00:28:51.292274 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:28:51.294724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:28:51.304068 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 24 00:28:51.306294 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:28:51.325794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:28:51.325825 kernel: GPT:9289727 != 19775487
Jan 24 00:28:51.325835 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:28:51.325844 kernel: GPT:9289727 != 19775487
Jan 24 00:28:51.325853 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:28:51.325862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:28:51.331149 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:28:51.344625 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:28:51.344660 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:28:51.344494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:28:51.344886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:28:51.357772 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:28:51.357918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:28:51.358214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:28:51.368562 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:28:51.394773 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (465)
Jan 24 00:28:51.399413 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479)
Jan 24 00:28:51.398055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:28:51.407853 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:28:51.419017 kernel: libata version 3.00 loaded.
Jan 24 00:28:51.425690 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:28:51.426018 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:28:51.435802 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:28:51.436306 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:28:51.441137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 24 00:28:51.446398 kernel: scsi host0: ahci
Jan 24 00:28:51.446712 kernel: scsi host1: ahci
Jan 24 00:28:51.446869 kernel: scsi host2: ahci
Jan 24 00:28:51.449071 kernel: scsi host3: ahci
Jan 24 00:28:51.450463 kernel: scsi host4: ahci
Jan 24 00:28:51.454869 kernel: scsi host5: ahci
Jan 24 00:28:51.455095 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 24 00:28:51.455118 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 24 00:28:51.459801 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 24 00:28:51.462306 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 24 00:28:51.464785 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 24 00:28:51.467085 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 24 00:28:51.468360 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 24 00:28:51.483376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:28:51.493464 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 24 00:28:51.503379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 24 00:28:51.523848 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:28:51.532356 disk-uuid[569]: Primary Header is updated.
Jan 24 00:28:51.532356 disk-uuid[569]: Secondary Entries is updated.
Jan 24 00:28:51.532356 disk-uuid[569]: Secondary Header is updated.
Jan 24 00:28:51.549262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:28:51.549292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:28:51.532276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:28:51.532370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:28:51.539869 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:28:51.561991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:28:51.586086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:28:51.609783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:28:51.636554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:28:51.777694 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:28:51.782738 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:28:51.782783 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:28:51.783722 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:28:51.787716 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:28:51.787772 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:28:51.791658 kernel: ata3.00: applying bridge limits
Jan 24 00:28:51.794710 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:28:51.794759 kernel: ata3.00: configured for UDMA/100
Jan 24 00:28:51.800635 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:28:51.846237 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:28:51.846902 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:28:51.863767 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:28:52.549689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:28:52.550424 disk-uuid[570]: The operation has completed successfully.
Jan 24 00:28:52.592185 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:28:52.592399 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:28:52.610145 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:28:52.617377 sh[600]: Success
Jan 24 00:28:52.638674 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:28:52.695276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:28:52.712748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:28:52.718185 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:28:52.737870 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:28:52.737928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:28:52.737941 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:28:52.741308 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:28:52.743557 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:28:52.755058 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:28:52.755908 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:28:52.771847 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:28:52.777474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:28:52.791714 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:28:52.791769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:28:52.791789 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:28:52.798703 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:28:52.811023 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:28:52.816225 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:28:52.827504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:28:52.839819 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:28:52.902819 ignition[666]: Ignition 2.19.0
Jan 24 00:28:52.902860 ignition[666]: Stage: fetch-offline
Jan 24 00:28:52.902927 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:28:52.902942 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:28:52.903117 ignition[666]: parsed url from cmdline: ""
Jan 24 00:28:52.903124 ignition[666]: no config URL provided
Jan 24 00:28:52.903133 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:28:52.903146 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:28:52.903184 ignition[666]: op(1): [started] loading QEMU firmware config module
Jan 24 00:28:52.903192 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:28:52.913259 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:28:52.913284 ignition[666]: QEMU firmware config was not found. Ignoring...
Jan 24 00:28:53.004108 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:28:53.026056 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:28:53.060837 systemd-networkd[787]: lo: Link UP
Jan 24 00:28:53.060875 systemd-networkd[787]: lo: Gained carrier
Jan 24 00:28:53.063056 systemd-networkd[787]: Enumeration completed
Jan 24 00:28:53.064260 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:28:53.064266 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:28:53.064826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:28:53.068793 systemd-networkd[787]: eth0: Link UP
Jan 24 00:28:53.068799 systemd-networkd[787]: eth0: Gained carrier
Jan 24 00:28:53.068812 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:28:53.071310 systemd[1]: Reached target network.target - Network.
Jan 24 00:28:53.105730 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:28:53.134031 ignition[666]: parsing config with SHA512: 79e1e1f974c78300f509269f211d4a249bab6f213c6aa447a8582809628601b5ee3b189d87459c4cef51645b045f782dc8ecd95074745af26f084143abd589ed
Jan 24 00:28:53.142821 unknown[666]: fetched base config from "system"
Jan 24 00:28:53.142862 unknown[666]: fetched user config from "qemu"
Jan 24 00:28:53.143555 ignition[666]: fetch-offline: fetch-offline passed
Jan 24 00:28:53.145886 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:28:53.143733 ignition[666]: Ignition finished successfully
Jan 24 00:28:53.153083 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:28:53.170946 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:28:53.194448 ignition[792]: Ignition 2.19.0
Jan 24 00:28:53.194461 ignition[792]: Stage: kargs
Jan 24 00:28:53.194752 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:28:53.201719 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:28:53.194765 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:28:53.195496 ignition[792]: kargs: kargs passed
Jan 24 00:28:53.195543 ignition[792]: Ignition finished successfully
Jan 24 00:28:53.227900 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:28:53.241925 ignition[800]: Ignition 2.19.0
Jan 24 00:28:53.241999 ignition[800]: Stage: disks
Jan 24 00:28:53.245278 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:28:53.242295 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:28:53.252945 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:28:53.242316 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:28:53.261467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:28:53.243338 ignition[800]: disks: disks passed
Jan 24 00:28:53.267310 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:28:53.243411 ignition[800]: Ignition finished successfully
Jan 24 00:28:53.272015 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:28:53.280822 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:28:53.312084 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:28:53.330458 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.57
Jan 24 00:28:53.338149 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:28:53.330468 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Jan 24 00:28:53.338308 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:28:53.352786 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:28:53.479767 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:28:53.480256 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:28:53.485007 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:28:53.511848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:28:53.537226 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Jan 24 00:28:53.537278 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:28:53.537298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:28:53.537316 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:28:53.517446 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:28:53.549693 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:28:53.537526 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:28:53.537666 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:28:53.537715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:28:53.552319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:28:53.562179 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:28:53.569222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:28:53.628802 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:28:53.635089 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:28:53.640803 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:28:53.646372 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:28:53.791680 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:28:53.807860 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:28:53.815140 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:28:53.826202 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:28:53.828114 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:28:53.856361 ignition[931]: INFO : Ignition 2.19.0 Jan 24 00:28:53.856361 ignition[931]: INFO : Stage: mount Jan 24 00:28:53.866859 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:53.866859 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:28:53.866859 ignition[931]: INFO : mount: mount passed Jan 24 00:28:53.866859 ignition[931]: INFO : Ignition finished successfully Jan 24 00:28:53.859914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:28:53.869368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:28:53.884052 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:28:53.896759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:28:53.918486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 24 00:28:53.918547 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:28:53.918563 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:28:53.924910 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:28:53.933747 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:28:53.936025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:28:53.980363 ignition[961]: INFO : Ignition 2.19.0 Jan 24 00:28:53.980363 ignition[961]: INFO : Stage: files Jan 24 00:28:53.987272 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:53.987272 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:28:53.987272 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:28:53.998070 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:28:53.998070 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:28:53.998070 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:28:54.010201 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:28:54.014068 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:28:54.010529 unknown[961]: wrote ssh authorized keys file for user: core Jan 24 00:28:54.021914 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:28:54.028457 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:28:54.073881 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:28:54.196543 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:28:54.196543 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:28:54.208083 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:28:54.505349 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:28:54.984186 systemd-networkd[787]: eth0: Gained IPv6LL Jan 24 00:28:56.373515 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:28:56.373515 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:28:56.386371 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:28:56.394487 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:28:56.394487 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:28:56.394487 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:28:56.394487 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:28:56.419196 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:28:56.419196 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:28:56.419196 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:28:56.496311 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:28:56.507394 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:28:56.515655 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:28:56.515655 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:28:56.529396 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:28:56.535864 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:28:56.543979 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:28:56.555073 ignition[961]: INFO : files: files passed Jan 24 00:28:56.555073 ignition[961]: INFO : Ignition finished successfully Jan 24 00:28:56.565892 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:28:56.587952 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:28:56.594328 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 24 00:28:56.611198 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:28:56.611429 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:28:56.625732 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:28:56.637368 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:56.637368 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:56.627769 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:28:56.665798 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:56.638261 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:28:56.673074 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:28:56.721514 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:28:56.721826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:28:56.733136 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:28:56.743468 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:28:56.753286 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:28:56.771074 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:28:56.798273 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:28:56.824941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:28:56.845685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:28:56.853889 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:28:56.866499 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:28:56.875085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:28:56.875293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:28:56.883041 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:28:56.889920 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:28:56.899851 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:28:56.906793 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:28:56.916377 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:28:56.946430 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:28:56.954363 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:28:56.965436 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:28:56.976051 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:28:56.986973 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:28:56.996237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:28:56.996451 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:28:57.006741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:28:57.015692 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:28:57.026414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:28:57.026844 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:28:57.037720 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:28:57.037961 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:28:57.048702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:28:57.048902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:28:57.059520 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:28:57.067365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:28:57.067950 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:28:57.078361 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:28:57.087162 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:28:57.096274 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:28:57.096470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:28:57.169103 ignition[1015]: INFO : Ignition 2.19.0 Jan 24 00:28:57.169103 ignition[1015]: INFO : Stage: umount Jan 24 00:28:57.169103 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:57.169103 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:28:57.106193 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:28:57.210479 ignition[1015]: INFO : umount: umount passed Jan 24 00:28:57.210479 ignition[1015]: INFO : Ignition finished successfully Jan 24 00:28:57.106412 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:28:57.111327 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:28:57.111565 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:28:57.113717 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:28:57.113938 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:28:57.146314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:28:57.154451 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:28:57.154773 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:28:57.166243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:28:57.176812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:28:57.177185 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:28:57.185816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:28:57.186094 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:28:57.202966 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:28:57.204330 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:28:57.204534 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:28:57.211201 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:28:57.211400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 24 00:28:57.221285 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:28:57.221473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:28:57.228927 systemd[1]: Stopped target network.target - Network. Jan 24 00:28:57.237133 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:28:57.237254 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:28:57.248503 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:28:57.248775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:28:57.258422 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:28:57.258515 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:28:57.267875 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:28:57.267984 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:28:57.277100 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:28:57.277206 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:28:57.284047 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:28:57.290342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:28:57.297767 systemd-networkd[787]: eth0: DHCPv6 lease lost Jan 24 00:28:57.302402 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:28:57.302717 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:28:57.310675 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:28:57.310932 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:28:57.317911 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:28:57.318040 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:28:57.342931 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:28:57.348959 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:28:57.349139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:28:57.356540 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:28:57.356719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:28:57.364058 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:28:57.364165 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:28:57.364356 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:28:57.364427 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:28:57.366723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:28:57.381890 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:28:57.382145 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:28:57.391701 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:28:57.391982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:28:57.399977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:28:57.575523 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:28:57.400131 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:28:57.406529 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:28:57.406743 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:28:57.414514 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:28:57.414814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:28:57.422202 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:28:57.422295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:28:57.429690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:28:57.429802 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:28:57.457215 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:28:57.464812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:28:57.464966 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:28:57.472443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:28:57.472562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:57.479931 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:28:57.480203 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:28:57.486704 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:28:57.513058 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:28:57.528464 systemd[1]: Switching root. Jan 24 00:28:57.676699 systemd-journald[194]: Journal stopped Jan 24 00:28:59.501170 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:28:59.501271 kernel: SELinux: policy capability open_perms=1 Jan 24 00:28:59.501292 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:28:59.501316 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:28:59.501344 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:28:59.501362 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:28:59.501379 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:28:59.501396 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:28:59.501412 kernel: audit: type=1403 audit(1769214537.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:28:59.501431 systemd[1]: Successfully loaded SELinux policy in 85.483ms. Jan 24 00:28:59.501472 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.875ms. Jan 24 00:28:59.501503 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:28:59.501521 systemd[1]: Detected virtualization kvm. Jan 24 00:28:59.501538 systemd[1]: Detected architecture x86-64. Jan 24 00:28:59.501558 systemd[1]: Detected first boot. Jan 24 00:28:59.501575 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:28:59.501674 zram_generator::config[1058]: No configuration found. Jan 24 00:28:59.501698 systemd[1]: Populated /etc with preset unit settings. 
Jan 24 00:28:59.501717 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:28:59.501734 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:28:59.501759 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:28:59.501778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:28:59.501796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:28:59.501814 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:28:59.501833 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:28:59.501853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:28:59.501872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:28:59.501891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:28:59.501915 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:28:59.501934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:28:59.501953 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:28:59.501972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:28:59.501997 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:28:59.502063 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:28:59.502090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:28:59.502112 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:28:59.502131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:28:59.502156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:28:59.502177 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:28:59.502196 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:28:59.502216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:28:59.502235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:28:59.502254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:28:59.502272 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:28:59.502290 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:28:59.502313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:28:59.502332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:28:59.502350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:28:59.502367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:28:59.502385 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:28:59.502403 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:28:59.502421 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 24 00:28:59.502440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:28:59.502459 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:28:59.502484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:59.502504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:28:59.502526 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:28:59.502545 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:28:59.502563 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:28:59.502656 systemd[1]: Reached target machines.target - Containers. Jan 24 00:28:59.502681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:28:59.502699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:28:59.502722 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:28:59.502739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:28:59.502757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:28:59.502773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:28:59.502792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:28:59.502810 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:28:59.502829 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:28:59.502847 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:28:59.502866 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:28:59.502889 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:28:59.502907 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:28:59.502926 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:28:59.502943 kernel: fuse: init (API version 7.39) Jan 24 00:28:59.502961 kernel: loop: module loaded Jan 24 00:28:59.502978 kernel: ACPI: bus type drm_connector registered Jan 24 00:28:59.502999 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:28:59.503071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:28:59.503096 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:28:59.503120 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:28:59.503169 systemd-journald[1142]: Collecting audit messages is disabled. Jan 24 00:28:59.503205 systemd-journald[1142]: Journal started Jan 24 00:28:59.503237 systemd-journald[1142]: Runtime Journal (/run/log/journal/c424f75fba584d54b96a0e02f74b6b8d) is 6.0M, max 48.3M, 42.2M free. Jan 24 00:28:58.765917 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:28:58.811085 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jan 24 00:28:58.812365 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:28:58.813151 systemd[1]: systemd-journald.service: Consumed 2.143s CPU time. Jan 24 00:28:59.522968 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:28:59.526720 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:28:59.526778 systemd[1]: Stopped verity-setup.service. Jan 24 00:28:59.540740 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:59.550970 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:28:59.552798 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:28:59.557497 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:28:59.562752 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:28:59.567343 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:28:59.573170 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:28:59.578784 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:28:59.583468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:28:59.588972 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:28:59.595109 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:28:59.595401 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:28:59.605885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:28:59.606269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:28:59.611945 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:28:59.612311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:28:59.616303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:28:59.616657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:28:59.622570 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:28:59.622943 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:28:59.628898 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:28:59.629306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:28:59.635434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:28:59.642247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:28:59.651773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:28:59.676713 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:28:59.695950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:28:59.704140 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:28:59.710111 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:28:59.710231 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:28:59.717551 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 24 00:28:59.726546 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:28:59.734674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:28:59.740302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:28:59.743147 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:28:59.753683 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:28:59.761279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:28:59.763899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:28:59.771075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:28:59.774928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:28:59.785153 systemd-journald[1142]: Time spent on flushing to /var/log/journal/c424f75fba584d54b96a0e02f74b6b8d is 20.202ms for 986 entries. Jan 24 00:28:59.785153 systemd-journald[1142]: System Journal (/var/log/journal/c424f75fba584d54b96a0e02f74b6b8d) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:28:59.889940 systemd-journald[1142]: Received client request to flush runtime journal. Jan 24 00:28:59.882445 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:28:59.894835 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:28:59.903959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:28:59.911062 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:28:59.916368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:28:59.924570 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:28:59.934218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:28:59.956827 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 00:28:59.948429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:28:59.971554 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:28:59.991018 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:29:00.216660 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:29:00.270388 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:29:00.324106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:29:00.330283 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:29:00.331450 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:29:00.342774 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:29:00.359839 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:29:00.360842 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 24 00:29:00.413221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:29:00.694857 kernel: loop2: detected capacity change from 0 to 219144 Jan 24 00:29:00.807973 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:29:00.888573 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:29:00.890335 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 24 00:29:00.890363 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 24 00:29:00.915899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:29:00.938695 kernel: loop5: detected capacity change from 0 to 219144 Jan 24 00:29:00.969388 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:29:00.970283 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 24 00:29:00.975893 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:29:00.976102 systemd[1]: Reloading... Jan 24 00:29:01.436712 zram_generator::config[1227]: No configuration found. Jan 24 00:29:01.685520 kernel: hrtimer: interrupt took 3780637 ns Jan 24 00:29:01.916576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:29:02.123926 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:29:02.127391 systemd[1]: Reloading finished in 1150 ms. Jan 24 00:29:02.168752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:29:02.175269 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:29:02.197013 systemd[1]: Starting ensure-sysext.service... Jan 24 00:29:02.205888 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:29:02.219813 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:29:02.219862 systemd[1]: Reloading... Jan 24 00:29:02.419827 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:29:02.420826 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:29:02.422122 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:29:02.422452 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 24 00:29:02.422556 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 24 00:29:02.429744 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:29:02.429760 systemd-tmpfiles[1262]: Skipping /boot Jan 24 00:29:02.436145 zram_generator::config[1289]: No configuration found. Jan 24 00:29:02.460930 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:29:02.460970 systemd-tmpfiles[1262]: Skipping /boot Jan 24 00:29:02.769918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:29:02.842507 systemd[1]: Reloading finished in 621 ms. 
Jan 24 00:29:02.869914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:29:02.887699 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:29:02.915202 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:02.923233 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:29:02.931304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:29:02.942456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:29:02.955024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:29:02.963725 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:29:02.974326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:02.974568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:29:02.977976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:29:02.993477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:29:03.004779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:29:03.007975 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Jan 24 00:29:03.011226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:29:03.012137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:03.016234 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:29:03.019329 augenrules[1352]: No rules Jan 24 00:29:03.024734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:03.033133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:29:03.033517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:29:03.063460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:29:03.063929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:29:03.083403 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:29:03.083730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:29:03.097791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:29:03.151349 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:29:03.160720 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:29:03.195880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:03.196877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:29:03.209200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:29:03.220946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 24 00:29:03.236195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:29:03.257358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:29:03.268447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:29:03.282047 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:29:03.292918 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:29:03.304874 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:29:03.309924 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:29:03.310007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:03.311673 systemd[1]: Finished ensure-sysext.service. Jan 24 00:29:03.317407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:29:03.317865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:29:03.328228 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:29:03.335094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:29:03.343987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:29:03.344473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:29:03.354382 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:29:03.354745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:29:03.364550 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:29:03.381849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:29:03.382018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:29:03.397926 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:29:03.408245 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:29:03.444731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:29:03.455896 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:29:03.476705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1371) Jan 24 00:29:03.500677 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:29:03.515955 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:29:03.516676 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:29:03.516952 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:29:03.519667 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:29:03.551679 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:29:04.169197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 24 00:29:04.191885 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:29:04.281030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:04.293026 systemd-networkd[1390]: lo: Link UP Jan 24 00:29:04.293104 systemd-networkd[1390]: lo: Gained carrier Jan 24 00:29:04.302018 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:29:04.302765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:29:04.303144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:04.311444 systemd-networkd[1390]: Enumeration completed Jan 24 00:29:04.350335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:04.357771 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:04.357852 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:29:04.360901 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:29:04.361546 systemd-networkd[1390]: eth0: Link UP Jan 24 00:29:04.361636 systemd-networkd[1390]: eth0: Gained carrier Jan 24 00:29:04.361663 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:04.368295 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:29:04.680770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:29:04.703846 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:29:04.776970 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:29:04.784032 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:29:04.789570 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:29:04.789815 systemd-timesyncd[1401]: Initial clock synchronization to Sat 2026-01-24 00:29:04.881479 UTC. Jan 24 00:29:04.843770 systemd-resolved[1335]: Positive Trust Anchors: Jan 24 00:29:04.843823 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:29:04.843870 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:29:04.851407 kernel: kvm_amd: TSC scaling supported Jan 24 00:29:04.851491 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:29:04.851514 kernel: kvm_amd: Nested Paging enabled Jan 24 00:29:04.852965 systemd-resolved[1335]: Defaulting to hostname 'linux'. Jan 24 00:29:04.857920 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 24 00:29:04.858706 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:29:04.858746 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:29:04.862137 systemd[1]: Reached target network.target - Network. Jan 24 00:29:04.866674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:29:05.022980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:05.035708 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:29:05.093060 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:29:05.115095 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:29:05.140885 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:29:05.192660 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:29:05.197892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:29:05.201369 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:29:05.205484 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:29:05.209886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:29:05.214841 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:29:05.220217 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:29:05.224949 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:29:05.229722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:29:05.229792 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:29:05.233252 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:29:05.240302 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:29:05.246901 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:29:05.265165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:29:05.274107 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:29:05.281074 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:29:05.286119 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:29:05.290783 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:29:05.295494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:29:05.295534 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:29:05.298029 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:29:05.300081 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:29:05.309448 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:29:05.317097 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:29:05.325085 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 24 00:29:05.329415 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:29:05.333907 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:29:05.339131 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:29:05.342083 jq[1437]: false Jan 24 00:29:05.344760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:29:05.354108 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:29:05.367380 extend-filesystems[1438]: Found loop3 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found loop4 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found loop5 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found sr0 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda1 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda2 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda3 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found usr Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda4 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda6 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda7 Jan 24 00:29:05.370643 extend-filesystems[1438]: Found vda9 Jan 24 00:29:05.370643 extend-filesystems[1438]: Checking size of /dev/vda9 Jan 24 00:29:05.572229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1369) Jan 24 00:29:05.572261 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:29:05.577817 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:29:05.469699 dbus-daemon[1436]: [system] SELinux support is enabled Jan 24 00:29:05.599779 extend-filesystems[1438]: Resized partition /dev/vda9 Jan 24 00:29:05.375855 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:29:05.603914 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:29:05.603914 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:29:05.603914 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:29:05.603914 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:29:05.385943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:29:05.629439 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Jan 24 00:29:05.388116 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:29:05.633500 update_engine[1450]: I20260124 00:29:05.517058 1450 main.cc:92] Flatcar Update Engine starting Jan 24 00:29:05.633500 update_engine[1450]: I20260124 00:29:05.521781 1450 update_check_scheduler.cc:74] Next update check in 2m13s Jan 24 00:29:05.396077 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:29:05.846475 jq[1458]: true Jan 24 00:29:05.422921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:29:05.433386 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 24 00:29:05.847166 tar[1461]: linux-amd64/LICENSE Jan 24 00:29:05.847166 tar[1461]: linux-amd64/helm Jan 24 00:29:05.455253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:29:05.847704 jq[1463]: true Jan 24 00:29:05.455725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:29:05.456123 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:29:05.456407 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:29:05.468393 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:29:05.468801 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:29:05.479799 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:29:05.542453 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:29:05.542481 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:29:05.543803 systemd-logind[1444]: New seat seat0. Jan 24 00:29:05.554484 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:29:05.563262 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:29:05.576376 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:29:05.584423 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:29:05.584690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:29:05.590915 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:29:05.591021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:29:05.615119 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:29:05.837040 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:29:05.837418 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:29:05.842192 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 24 00:29:05.875560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:29:05.907783 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:29:05.920772 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:29:05.925026 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:29:05.944058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:05.955820 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:29:05.963089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:29:06.180158 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:29:06.227301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 24 00:29:06.240347 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:29:06.290322 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:29:06.313333 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:29:06.313777 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:29:06.323343 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:29:06.477061 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:29:06.499752 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:29:06.524123 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:29:06.524420 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:29:06.536659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:29:06.721068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:29:06.739358 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:29:06.748581 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:29:06.753941 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:29:07.350887 containerd[1464]: time="2026-01-24T00:29:07.350119277Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:29:07.532688 containerd[1464]: time="2026-01-24T00:29:07.532515719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.537014 containerd[1464]: time="2026-01-24T00:29:07.536840506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:07.537014 containerd[1464]: time="2026-01-24T00:29:07.537008059Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:29:07.537163 containerd[1464]: time="2026-01-24T00:29:07.537037234Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:29:07.537498 containerd[1464]: time="2026-01-24T00:29:07.537338605Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:29:07.537498 containerd[1464]: time="2026-01-24T00:29:07.537410676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.537693 containerd[1464]: time="2026-01-24T00:29:07.537652182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:07.537721 containerd[1464]: time="2026-01-24T00:29:07.537693946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538137 containerd[1464]: time="2026-01-24T00:29:07.538045232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538137 containerd[1464]: time="2026-01-24T00:29:07.538119701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538185 containerd[1464]: time="2026-01-24T00:29:07.538143040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538185 containerd[1464]: time="2026-01-24T00:29:07.538159857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538495 containerd[1464]: time="2026-01-24T00:29:07.538285281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.538904 containerd[1464]: time="2026-01-24T00:29:07.538857362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:07.540022 containerd[1464]: time="2026-01-24T00:29:07.539142231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:07.540022 containerd[1464]: time="2026-01-24T00:29:07.539321069Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:29:07.540198 containerd[1464]: time="2026-01-24T00:29:07.540149652Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:29:07.540332 containerd[1464]: time="2026-01-24T00:29:07.540288536Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:29:07.551843 containerd[1464]: time="2026-01-24T00:29:07.551737028Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:29:07.551970 containerd[1464]: time="2026-01-24T00:29:07.551956124Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:29:07.552005 containerd[1464]: time="2026-01-24T00:29:07.551983378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:29:07.552184 containerd[1464]: time="2026-01-24T00:29:07.552062174Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:29:07.552235 containerd[1464]: time="2026-01-24T00:29:07.552187376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:29:07.552511 containerd[1464]: time="2026-01-24T00:29:07.552465195Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:29:07.554254 tar[1461]: linux-amd64/README.md Jan 24 00:29:07.554952 containerd[1464]: time="2026-01-24T00:29:07.554904236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:29:07.555376 containerd[1464]: time="2026-01-24T00:29:07.555300826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 24 00:29:07.555376 containerd[1464]: time="2026-01-24T00:29:07.555370117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:29:07.555450 containerd[1464]: time="2026-01-24T00:29:07.555391838Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:29:07.555450 containerd[1464]: time="2026-01-24T00:29:07.555413074Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555668 containerd[1464]: time="2026-01-24T00:29:07.555436009Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555668 containerd[1464]: time="2026-01-24T00:29:07.555519568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555668 containerd[1464]: time="2026-01-24T00:29:07.555543615Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555668 containerd[1464]: time="2026-01-24T00:29:07.555564599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555819 containerd[1464]: time="2026-01-24T00:29:07.555689154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555819 containerd[1464]: time="2026-01-24T00:29:07.555714253Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555819 containerd[1464]: time="2026-01-24T00:29:07.555730170Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:29:07.555819 containerd[1464]: time="2026-01-24T00:29:07.555809077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555831002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555848526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555866334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555882848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555909150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.555948 containerd[1464]: time="2026-01-24T00:29:07.555926655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.555956193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.555975174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.555994469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.556010953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.556028346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.556046761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556120 containerd[1464]: time="2026-01-24T00:29:07.556068482Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:29:07.556409 containerd[1464]: time="2026-01-24T00:29:07.556175341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556409 containerd[1464]: time="2026-01-24T00:29:07.556195918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556409 containerd[1464]: time="2026-01-24T00:29:07.556210663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:29:07.556409 containerd[1464]: time="2026-01-24T00:29:07.556342822Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:29:07.556409 containerd[1464]: time="2026-01-24T00:29:07.556396853Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:29:07.556548 containerd[1464]: time="2026-01-24T00:29:07.556414428Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:29:07.556548 containerd[1464]: time="2026-01-24T00:29:07.556430608Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:29:07.556697 containerd[1464]: time="2026-01-24T00:29:07.556582164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:29:07.556697 containerd[1464]: time="2026-01-24T00:29:07.556689791Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:29:07.556788 containerd[1464]: time="2026-01-24T00:29:07.556744336Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:29:07.556788 containerd[1464]: time="2026-01-24T00:29:07.556764319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:29:07.558338 containerd[1464]: time="2026-01-24T00:29:07.558181225Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:29:07.558338 containerd[1464]: time="2026-01-24T00:29:07.558320311Z" level=info msg="Connect containerd service" Jan 24 00:29:07.558795 containerd[1464]: time="2026-01-24T00:29:07.558449204Z" level=info msg="using legacy CRI server" Jan 24 00:29:07.558795 containerd[1464]: time="2026-01-24T00:29:07.558465182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:29:07.559065 containerd[1464]: time="2026-01-24T00:29:07.558938404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:29:07.561133 containerd[1464]: time="2026-01-24T00:29:07.560877162Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:29:07.562308 
containerd[1464]: time="2026-01-24T00:29:07.561763346Z" level=info msg="Start subscribing containerd event" Jan 24 00:29:07.562308 containerd[1464]: time="2026-01-24T00:29:07.562158956Z" level=info msg="Start recovering state" Jan 24 00:29:07.562385 containerd[1464]: time="2026-01-24T00:29:07.562365552Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:29:07.562547 containerd[1464]: time="2026-01-24T00:29:07.562442023Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:29:07.562677 containerd[1464]: time="2026-01-24T00:29:07.562444106Z" level=info msg="Start event monitor" Jan 24 00:29:07.562677 containerd[1464]: time="2026-01-24T00:29:07.562655779Z" level=info msg="Start snapshots syncer" Jan 24 00:29:07.562747 containerd[1464]: time="2026-01-24T00:29:07.562684661Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:29:07.562747 containerd[1464]: time="2026-01-24T00:29:07.562692973Z" level=info msg="Start streaming server" Jan 24 00:29:07.563662 containerd[1464]: time="2026-01-24T00:29:07.562971035Z" level=info msg="containerd successfully booted in 0.216581s" Jan 24 00:29:07.566880 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:29:07.594875 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:29:08.346806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:08.352155 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:29:08.357034 systemd[1]: Startup finished in 1.083s (kernel) + 7.986s (initrd) + 10.632s (userspace) = 19.703s. Jan 24 00:29:08.357700 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:09.139211 kubelet[1549]: E0124 00:29:09.139091 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:09.144272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:09.144681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:29:09.145253 systemd[1]: kubelet.service: Consumed 1.817s CPU time. Jan 24 00:29:15.061890 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:29:15.063930 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:46762.service - OpenSSH per-connection server daemon (10.0.0.1:46762). Jan 24 00:29:15.152775 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 46762 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:15.156132 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:15.174792 systemd-logind[1444]: New session 1 of user core. Jan 24 00:29:15.176883 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:29:15.191288 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:29:15.216398 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:29:15.236290 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 24 00:29:15.242064 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:29:15.415354 systemd[1568]: Queued start job for default target default.target. Jan 24 00:29:15.427578 systemd[1568]: Created slice app.slice - User Application Slice. Jan 24 00:29:15.427696 systemd[1568]: Reached target paths.target - Paths. Jan 24 00:29:15.427715 systemd[1568]: Reached target timers.target - Timers. Jan 24 00:29:15.430060 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:29:15.524275 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:29:15.524537 systemd[1568]: Reached target sockets.target - Sockets. Jan 24 00:29:15.524562 systemd[1568]: Reached target basic.target - Basic System. Jan 24 00:29:15.524755 systemd[1568]: Reached target default.target - Main User Target. Jan 24 00:29:15.524816 systemd[1568]: Startup finished in 269ms. Jan 24 00:29:15.525511 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:29:15.542349 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:29:15.614831 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:46774.service - OpenSSH per-connection server daemon (10.0.0.1:46774). Jan 24 00:29:15.685298 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 46774 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:15.687838 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:15.694897 systemd-logind[1444]: New session 2 of user core. Jan 24 00:29:15.704922 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:29:15.769541 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:15.780815 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:46774.service: Deactivated successfully. Jan 24 00:29:15.783345 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:29:15.786933 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:29:15.796002 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:46790.service - OpenSSH per-connection server daemon (10.0.0.1:46790). Jan 24 00:29:15.797095 systemd-logind[1444]: Removed session 2. Jan 24 00:29:15.841107 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 46790 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:15.843779 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:15.850311 systemd-logind[1444]: New session 3 of user core. Jan 24 00:29:15.859986 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:29:15.916479 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:15.930381 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:46790.service: Deactivated successfully. Jan 24 00:29:15.933069 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:29:15.934782 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:29:15.945331 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794). Jan 24 00:29:15.947440 systemd-logind[1444]: Removed session 3. 
Jan 24 00:29:15.990515 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:15.992814 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:15.999562 systemd-logind[1444]: New session 4 of user core. Jan 24 00:29:16.014055 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:29:16.078270 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:16.090208 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:46794.service: Deactivated successfully. Jan 24 00:29:16.092980 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:29:16.095743 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:29:16.109388 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:46798.service - OpenSSH per-connection server daemon (10.0.0.1:46798). Jan 24 00:29:16.111253 systemd-logind[1444]: Removed session 4. Jan 24 00:29:16.151359 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 46798 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:16.153541 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:16.159956 systemd-logind[1444]: New session 5 of user core. Jan 24 00:29:16.175036 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:29:16.249312 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:29:16.249902 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:16.267340 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:16.270866 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:16.281844 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:46798.service: Deactivated successfully. Jan 24 00:29:16.284287 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:29:16.286237 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:29:16.297327 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:46800.service - OpenSSH per-connection server daemon (10.0.0.1:46800). Jan 24 00:29:16.298977 systemd-logind[1444]: Removed session 5. Jan 24 00:29:16.344318 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 46800 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:16.346747 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:16.353252 systemd-logind[1444]: New session 6 of user core. Jan 24 00:29:16.366935 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:29:16.428216 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:29:16.428769 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:16.434465 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:16.444152 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:29:16.444777 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:16.469162 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:16.473925 auditctl[1617]: No rules Jan 24 00:29:16.474670 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 24 00:29:16.475063 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:16.479217 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:16.537713 augenrules[1635]: No rules Jan 24 00:29:16.539923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:16.541888 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:16.544883 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:16.562924 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:46800.service: Deactivated successfully. Jan 24 00:29:16.565406 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:29:16.567917 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:29:16.578153 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:46806.service - OpenSSH per-connection server daemon (10.0.0.1:46806). Jan 24 00:29:16.579764 systemd-logind[1444]: Removed session 6. Jan 24 00:29:16.631671 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 46806 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:29:16.634056 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:16.642188 systemd-logind[1444]: New session 7 of user core. Jan 24 00:29:16.652984 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:29:16.713451 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:29:16.713915 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:18.806691 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:29:18.856985 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:29:19.298259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:29:19.319827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:20.314346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:20.320425 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:20.644091 kubelet[1679]: E0124 00:29:20.643777 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:20.649919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:20.650103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:29:20.650479 systemd[1]: kubelet.service: Consumed 1.480s CPU time. Jan 24 00:29:21.214073 dockerd[1665]: time="2026-01-24T00:29:21.213840266Z" level=info msg="Starting up" Jan 24 00:29:21.683304 systemd[1]: var-lib-docker-metacopy\x2dcheck3466738204-merged.mount: Deactivated successfully. Jan 24 00:29:21.708361 dockerd[1665]: time="2026-01-24T00:29:21.708213455Z" level=info msg="Loading containers: start." 
Jan 24 00:29:21.902649 kernel: Initializing XFRM netlink socket Jan 24 00:29:22.011097 systemd-networkd[1390]: docker0: Link UP Jan 24 00:29:22.039851 dockerd[1665]: time="2026-01-24T00:29:22.039789746Z" level=info msg="Loading containers: done." Jan 24 00:29:22.176701 dockerd[1665]: time="2026-01-24T00:29:22.176543846Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:29:22.177077 dockerd[1665]: time="2026-01-24T00:29:22.177011387Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:29:22.177319 dockerd[1665]: time="2026-01-24T00:29:22.177264437Z" level=info msg="Daemon has completed initialization" Jan 24 00:29:22.239348 dockerd[1665]: time="2026-01-24T00:29:22.239032974Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:29:22.239527 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:29:24.499436 containerd[1464]: time="2026-01-24T00:29:24.499305420Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:29:25.607903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997937209.mount: Deactivated successfully. Jan 24 00:29:30.061205 containerd[1464]: time="2026-01-24T00:29:30.061088374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:30.062394 containerd[1464]: time="2026-01-24T00:29:30.062224608Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 24 00:29:30.063554 containerd[1464]: time="2026-01-24T00:29:30.063501392Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:30.067964 containerd[1464]: time="2026-01-24T00:29:30.067887352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:30.069969 containerd[1464]: time="2026-01-24T00:29:30.069848145Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 5.570444502s" Jan 24 00:29:30.070076 containerd[1464]: time="2026-01-24T00:29:30.069996857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:29:30.077474 containerd[1464]: time="2026-01-24T00:29:30.077410431Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:29:30.852542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:29:30.871223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:31.414076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:29:31.447802 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:31.660124 kubelet[1898]: E0124 00:29:31.659744 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:31.667079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:31.667383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:29:32.444107 containerd[1464]: time="2026-01-24T00:29:32.444004798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:32.446507 containerd[1464]: time="2026-01-24T00:29:32.446353155Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 24 00:29:32.448643 containerd[1464]: time="2026-01-24T00:29:32.448500314Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:32.453185 containerd[1464]: time="2026-01-24T00:29:32.453046906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:32.455917 containerd[1464]: time="2026-01-24T00:29:32.455810928Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.378359813s" Jan 24 00:29:32.455917 containerd[1464]: time="2026-01-24T00:29:32.455924448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:29:32.470658 containerd[1464]: time="2026-01-24T00:29:32.468035650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:29:34.678373 containerd[1464]: time="2026-01-24T00:29:34.678219870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:34.679936 containerd[1464]: time="2026-01-24T00:29:34.679400917Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 24 00:29:34.681353 containerd[1464]: time="2026-01-24T00:29:34.681276271Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:34.687676 containerd[1464]: time="2026-01-24T00:29:34.687628903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 24 00:29:34.690328 containerd[1464]: time="2026-01-24T00:29:34.690130765Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.222030383s" Jan 24 00:29:34.690497 containerd[1464]: time="2026-01-24T00:29:34.690255479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:29:34.692935 containerd[1464]: time="2026-01-24T00:29:34.692834169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:29:37.073169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168380819.mount: Deactivated successfully. Jan 24 00:29:38.528051 containerd[1464]: time="2026-01-24T00:29:38.527563932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:38.529441 containerd[1464]: time="2026-01-24T00:29:38.528690603Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:29:38.530031 containerd[1464]: time="2026-01-24T00:29:38.529926733Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:38.533764 containerd[1464]: time="2026-01-24T00:29:38.533696258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:38.535135 containerd[1464]: time="2026-01-24T00:29:38.535037972Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.842113946s" Jan 24 00:29:38.535135 containerd[1464]: time="2026-01-24T00:29:38.535112388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:29:38.537371 containerd[1464]: time="2026-01-24T00:29:38.537311718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:29:39.290896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033526076.mount: Deactivated successfully. Jan 24 00:29:41.806983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:29:41.859108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:42.507052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:29:42.532659 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:42.944733 kubelet[1981]: E0124 00:29:42.944430 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:42.956527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:42.958148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:29:43.287885 containerd[1464]: time="2026-01-24T00:29:43.287478408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:43.289266 containerd[1464]: time="2026-01-24T00:29:43.289143647Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 24 00:29:43.290683 containerd[1464]: time="2026-01-24T00:29:43.290530883Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:43.295312 containerd[1464]: time="2026-01-24T00:29:43.295227384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:43.297826 containerd[1464]: time="2026-01-24T00:29:43.297725217Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.760376897s" Jan 24 00:29:43.297930 containerd[1464]: time="2026-01-24T00:29:43.297873652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:29:43.300107 containerd[1464]: time="2026-01-24T00:29:43.300061555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:29:43.986540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914907936.mount: Deactivated successfully. 
Jan 24 00:29:44.007279 containerd[1464]: time="2026-01-24T00:29:44.007131207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:44.013227 containerd[1464]: time="2026-01-24T00:29:44.012692193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 24 00:29:44.020690 containerd[1464]: time="2026-01-24T00:29:44.020431066Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:44.031153 containerd[1464]: time="2026-01-24T00:29:44.027244097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:44.031153 containerd[1464]: time="2026-01-24T00:29:44.029947746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 729.812691ms" Jan 24 00:29:44.031153 containerd[1464]: time="2026-01-24T00:29:44.030211021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:29:44.041664 containerd[1464]: time="2026-01-24T00:29:44.041321224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:29:44.607285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987110521.mount: Deactivated successfully. Jan 24 00:29:48.294139 containerd[1464]: time="2026-01-24T00:29:48.294025065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:48.295496 containerd[1464]: time="2026-01-24T00:29:48.294860173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 24 00:29:48.296145 containerd[1464]: time="2026-01-24T00:29:48.296097825Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:48.302480 containerd[1464]: time="2026-01-24T00:29:48.302422310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:48.303859 containerd[1464]: time="2026-01-24T00:29:48.303739966Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.26234508s" Jan 24 00:29:48.303859 containerd[1464]: time="2026-01-24T00:29:48.303782747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:29:50.682917 update_engine[1450]: I20260124 00:29:50.682726 1450 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:29:50.725678 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2078) Jan 24 00:29:50.787473 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2078) Jan 24 00:29:50.826510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2078) Jan 24 00:29:52.704996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:52.718082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:52.757073 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)... Jan 24 00:29:52.757129 systemd[1]: Reloading... Jan 24 00:29:52.841728 zram_generator::config[2133]: No configuration found. Jan 24 00:29:52.967898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:29:53.040831 systemd[1]: Reloading finished in 283 ms. Jan 24 00:29:53.105128 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:29:53.105242 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:29:53.105570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:53.108691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:53.285233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:53.293128 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:29:53.358108 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:29:53.358108 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:29:53.413374 kubelet[2182]: I0124 00:29:53.413213 2182 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:29:53.648673 kubelet[2182]: I0124 00:29:53.648477 2182 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:29:53.648673 kubelet[2182]: I0124 00:29:53.648636 2182 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:29:53.648673 kubelet[2182]: I0124 00:29:53.648680 2182 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:29:53.648828 kubelet[2182]: I0124 00:29:53.648693 2182 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:29:53.649127 kubelet[2182]: I0124 00:29:53.649067 2182 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:29:53.656635 kubelet[2182]: E0124 00:29:53.656471 2182 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:29:53.659485 kubelet[2182]: I0124 00:29:53.659409 2182 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:29:53.667842 kubelet[2182]: E0124 00:29:53.667726 2182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:29:53.667990 kubelet[2182]: I0124 00:29:53.667865 2182 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:29:53.677457 kubelet[2182]: I0124 00:29:53.677349 2182 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 24 00:29:53.678425 kubelet[2182]: I0124 00:29:53.678334 2182 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:29:53.678709 kubelet[2182]: I0124 00:29:53.678388 2182 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:29:53.678709 kubelet[2182]: I0124 00:29:53.678698 2182 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:29:53.678906 kubelet[2182]: I0124 00:29:53.678713 2182 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:29:53.678906 kubelet[2182]: I0124 00:29:53.678825 2182 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Jan 24 00:29:53.684885 kubelet[2182]: I0124 00:29:53.684795 2182 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:53.686280 kubelet[2182]: I0124 00:29:53.686190 2182 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:29:53.686280 kubelet[2182]: I0124 00:29:53.686227 2182 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:29:53.686280 kubelet[2182]: I0124 00:29:53.686250 2182 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:29:53.686280 kubelet[2182]: I0124 00:29:53.686269 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:29:53.687226 kubelet[2182]: E0124 00:29:53.687106 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:29:53.687226 kubelet[2182]: E0124 00:29:53.687178 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:29:53.688261 kubelet[2182]: I0124 00:29:53.688205 2182 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:29:53.688757 kubelet[2182]: I0124 00:29:53.688705 2182 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:29:53.688757 kubelet[2182]: I0124 00:29:53.688752 2182 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:29:53.688849 kubelet[2182]: W0124 00:29:53.688799 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 00:29:53.693005 kubelet[2182]: I0124 00:29:53.692947 2182 server.go:1262] "Started kubelet" Jan 24 00:29:53.695775 kubelet[2182]: I0124 00:29:53.694355 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:29:53.695775 kubelet[2182]: I0124 00:29:53.695414 2182 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:29:53.696206 kubelet[2182]: I0124 00:29:53.696165 2182 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:29:53.696301 kubelet[2182]: I0124 00:29:53.696257 2182 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:29:53.696411 kubelet[2182]: I0124 00:29:53.696344 2182 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:29:53.696815 kubelet[2182]: E0124 00:29:53.696759 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:29:53.697688 kubelet[2182]: I0124 00:29:53.697142 2182 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:29:53.697688 kubelet[2182]: I0124 00:29:53.697220 2182 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:29:53.697843 kubelet[2182]: I0124 00:29:53.697796 2182 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:29:53.697984 kubelet[2182]: E0124 00:29:53.697894 2182 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:29:53.698178 kubelet[2182]: E0124 00:29:53.698124 2182 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:29:53.698311 kubelet[2182]: E0124 00:29:53.698214 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Jan 24 00:29:53.698420 kubelet[2182]: I0124 00:29:53.698356 2182 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:29:53.698420 kubelet[2182]: I0124 00:29:53.698391 2182 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:29:53.698912 kubelet[2182]: I0124 00:29:53.698867 2182 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:29:53.699568 kubelet[2182]: E0124 00:29:53.698134 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d834c9c8076a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:29:53.69289079 +0000 UTC m=+0.393851097,LastTimestamp:2026-01-24 00:29:53.69289079 +0000 UTC 
m=+0.393851097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:29:53.700230 kubelet[2182]: I0124 00:29:53.699878 2182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:29:53.700230 kubelet[2182]: I0124 00:29:53.700205 2182 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:29:53.718701 kubelet[2182]: I0124 00:29:53.718655 2182 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:29:53.718701 kubelet[2182]: I0124 00:29:53.718687 2182 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:29:53.718701 kubelet[2182]: I0124 00:29:53.718703 2182 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:53.721221 kubelet[2182]: I0124 00:29:53.721180 2182 policy_none.go:49] "None policy: Start" Jan 24 00:29:53.721221 kubelet[2182]: I0124 00:29:53.721214 2182 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:29:53.721385 kubelet[2182]: I0124 00:29:53.721226 2182 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:29:53.723448 kubelet[2182]: I0124 00:29:53.723403 2182 policy_none.go:47] "Start" Jan 24 00:29:53.729743 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:29:53.736315 kubelet[2182]: I0124 00:29:53.733870 2182 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:29:53.736315 kubelet[2182]: I0124 00:29:53.735930 2182 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:29:53.736315 kubelet[2182]: I0124 00:29:53.735971 2182 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:29:53.736315 kubelet[2182]: I0124 00:29:53.735998 2182 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:29:53.736315 kubelet[2182]: E0124 00:29:53.736039 2182 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:29:53.736967 kubelet[2182]: E0124 00:29:53.736948 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:29:53.744459 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:29:53.748477 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
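[Annotation] The kubepods-burstable.slice and kubepods-besteffort.slice units created above are the per-QoS cgroup parents implied by --cgroups-per-qos; each pod then gets its own slice with the UID's dashes escaped to underscores, as the later kube-proxy slice in this journal shows. A sketch of that naming convention, illustrative rather than the kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the systemd slice naming visible in this journal:
// a per-QoS parent under kubepods.slice, plus a per-pod slice whose UID has
// "-" escaped to "_" (systemd reserves "-" to express slice nesting).
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Matches kubepods-besteffort-podd7c5277e_b8f6_4663_a392_aca9f3e6dfda.slice,
	// created for kube-proxy-7j9fc later in this log.
	fmt.Println(podSliceName("besteffort", "d7c5277e-b8f6-4663-a392-aca9f3e6dfda"))
}
```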
Jan 24 00:29:53.767081 kubelet[2182]: E0124 00:29:53.766959 2182 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:29:53.767327 kubelet[2182]: I0124 00:29:53.767246 2182 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:29:53.767327 kubelet[2182]: I0124 00:29:53.767297 2182 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:29:53.768136 kubelet[2182]: I0124 00:29:53.768045 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:29:53.768394 kubelet[2182]: E0124 00:29:53.768345 2182 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:29:53.768539 kubelet[2182]: E0124 00:29:53.768419 2182 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:29:53.852668 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 24 00:29:53.862234 kubelet[2182]: E0124 00:29:53.862083 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:53.866229 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 24 00:29:53.869150 kubelet[2182]: I0124 00:29:53.869099 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:29:53.869406 kubelet[2182]: E0124 00:29:53.869329 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:53.869453 kubelet[2182]: E0124 00:29:53.869427 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jan 24 00:29:53.871209 systemd[1]: Created slice kubepods-burstable-pod0a13708d5fb5b16df0d003be70ee7155.slice - libcontainer container kubepods-burstable-pod0a13708d5fb5b16df0d003be70ee7155.slice. 
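[Annotation] The eviction manager starting here enforces the HardEvictionThresholds dumped in the container-manager config earlier: memory.available < 100Mi, nodefs.available < 10%, and so on. A simplified sketch of how one signal is checked against either an absolute quantity or a percentage of capacity; the types are illustrative, since the kubelet uses resource.Quantity and richer threshold structs:

```go
package main

import "fmt"

// Threshold mirrors the shape logged above: either an absolute quantity
// (bytes) or a percentage of capacity, never both.
type Threshold struct {
	Signal     string
	Quantity   int64   // bytes; 0 if unset
	Percentage float64 // fraction of capacity; 0 if unset
}

// crossed reports whether the available amount has fallen below the threshold.
func crossed(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}     // 10%

	fmt.Println(crossed(memory, 64<<20, 8<<30)) // true: 64Mi < 100Mi
	fmt.Println(crossed(nodefs, 5<<30, 40<<30)) // false: 5Gi > 4Gi (10% of 40Gi)
}
```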
Jan 24 00:29:53.873492 kubelet[2182]: E0124 00:29:53.873386 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:53.898040 kubelet[2182]: I0124 00:29:53.897987 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:53.898040 kubelet[2182]: I0124 00:29:53.898079 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:53.898040 kubelet[2182]: I0124 00:29:53.898110 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:29:53.898396 kubelet[2182]: I0124 00:29:53.898136 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:29:53.898396 kubelet[2182]: I0124 00:29:53.898162 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:53.898396 kubelet[2182]: I0124 00:29:53.898190 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:53.898396 kubelet[2182]: I0124 00:29:53.898263 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:53.898742 kubelet[2182]: I0124 00:29:53.898325 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:29:53.898742 kubelet[2182]: I0124 00:29:53.898493 2182 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:29:53.899285 kubelet[2182]: E0124 00:29:53.899235 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Jan 24 00:29:54.072176 kubelet[2182]: I0124 00:29:54.072097 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:29:54.072882 kubelet[2182]: E0124 00:29:54.072761 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jan 24 00:29:54.166464 kubelet[2182]: E0124 00:29:54.166247 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.168014 containerd[1464]: time="2026-01-24T00:29:54.167905897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:54.172907 kubelet[2182]: E0124 00:29:54.172836 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.173639 containerd[1464]: time="2026-01-24T00:29:54.173477242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:54.177703 kubelet[2182]: E0124 00:29:54.177463 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.178149 containerd[1464]: time="2026-01-24T00:29:54.178124264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a13708d5fb5b16df0d003be70ee7155,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:54.301198 kubelet[2182]: E0124 00:29:54.301072 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Jan 24 00:29:54.476192 kubelet[2182]: I0124 00:29:54.475986 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:29:54.476710 kubelet[2182]: E0124 00:29:54.476347 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jan 24 00:29:54.584961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565497361.mount: Deactivated successfully. 
Jan 24 00:29:54.592029 containerd[1464]: time="2026-01-24T00:29:54.591874557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:29:54.595482 containerd[1464]: time="2026-01-24T00:29:54.595378831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:29:54.596133 containerd[1464]: time="2026-01-24T00:29:54.595929513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:29:54.597176 containerd[1464]: time="2026-01-24T00:29:54.597134375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:29:54.598185 containerd[1464]: time="2026-01-24T00:29:54.598111422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:29:54.598906 containerd[1464]: time="2026-01-24T00:29:54.598831570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:29:54.599880 containerd[1464]: time="2026-01-24T00:29:54.599776813Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:29:54.604269 containerd[1464]: time="2026-01-24T00:29:54.604206841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:29:54.605176 containerd[1464]: time="2026-01-24T00:29:54.605139099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.555377ms" Jan 24 00:29:54.606718 containerd[1464]: time="2026-01-24T00:29:54.606655937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 428.29261ms" Jan 24 00:29:54.607862 containerd[1464]: time="2026-01-24T00:29:54.607801356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 439.714144ms" Jan 24 00:29:54.612380 kubelet[2182]: E0124 00:29:54.612303 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d834c9c8076a6 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:29:53.69289079 +0000 UTC m=+0.393851097,LastTimestamp:2026-01-24 00:29:53.69289079 +0000 UTC m=+0.393851097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:29:54.705288 kubelet[2182]: E0124 00:29:54.705102 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712731845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712932196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712969507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.713048576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712938758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712971951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.712989645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.713240 containerd[1464]: time="2026-01-24T00:29:54.713065719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.720949 containerd[1464]: time="2026-01-24T00:29:54.720785789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:29:54.721049 containerd[1464]: time="2026-01-24T00:29:54.720997582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:29:54.721049 containerd[1464]: time="2026-01-24T00:29:54.721033810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.721204 containerd[1464]: time="2026-01-24T00:29:54.721144942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:29:54.744288 kubelet[2182]: E0124 00:29:54.744149 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:29:54.748794 systemd[1]: Started cri-containerd-dbbe110fc0f48831bcf84fa8a846b192dd478aac85f4f144ffe2ebcfe9a0fd1a.scope - libcontainer container dbbe110fc0f48831bcf84fa8a846b192dd478aac85f4f144ffe2ebcfe9a0fd1a. Jan 24 00:29:54.753261 systemd[1]: Started cri-containerd-287e9f2dffbe8afc4b150b0cce471ddf5139238bd9ade1c0d54a6308dd3de940.scope - libcontainer container 287e9f2dffbe8afc4b150b0cce471ddf5139238bd9ade1c0d54a6308dd3de940. Jan 24 00:29:54.756111 systemd[1]: Started cri-containerd-c28a0fd6bf3edf73321d2a8759da78bcb0c36ae0c4c9d28cc4120fc6d67718bb.scope - libcontainer container c28a0fd6bf3edf73321d2a8759da78bcb0c36ae0c4c9d28cc4120fc6d67718bb. Jan 24 00:29:54.795907 kubelet[2182]: E0124 00:29:54.795755 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:29:54.809736 containerd[1464]: time="2026-01-24T00:29:54.809661184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbbe110fc0f48831bcf84fa8a846b192dd478aac85f4f144ffe2ebcfe9a0fd1a\"" Jan 24 00:29:54.811669 kubelet[2182]: E0124 00:29:54.811576 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.814065 containerd[1464]: time="2026-01-24T00:29:54.813997534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"287e9f2dffbe8afc4b150b0cce471ddf5139238bd9ade1c0d54a6308dd3de940\"" Jan 24 00:29:54.816640 kubelet[2182]: E0124 00:29:54.816465 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.820927 containerd[1464]: time="2026-01-24T00:29:54.820840167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a13708d5fb5b16df0d003be70ee7155,Namespace:kube-system,Attempt:0,} returns sandbox id \"c28a0fd6bf3edf73321d2a8759da78bcb0c36ae0c4c9d28cc4120fc6d67718bb\"" Jan 24 00:29:54.822740 containerd[1464]: time="2026-01-24T00:29:54.822210398Z" level=info msg="CreateContainer within sandbox \"dbbe110fc0f48831bcf84fa8a846b192dd478aac85f4f144ffe2ebcfe9a0fd1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:29:54.822819 kubelet[2182]: E0124 00:29:54.822491 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:54.827186 containerd[1464]: time="2026-01-24T00:29:54.827072668Z" level=info msg="CreateContainer 
within sandbox \"287e9f2dffbe8afc4b150b0cce471ddf5139238bd9ade1c0d54a6308dd3de940\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:29:54.833041 containerd[1464]: time="2026-01-24T00:29:54.832947008Z" level=info msg="CreateContainer within sandbox \"c28a0fd6bf3edf73321d2a8759da78bcb0c36ae0c4c9d28cc4120fc6d67718bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:29:54.850516 containerd[1464]: time="2026-01-24T00:29:54.850421674Z" level=info msg="CreateContainer within sandbox \"dbbe110fc0f48831bcf84fa8a846b192dd478aac85f4f144ffe2ebcfe9a0fd1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4dfa9669bd21bdacc362a19365e3bc76ea6ff33fcbd85c1e21201568d1a83d8a\"" Jan 24 00:29:54.852390 containerd[1464]: time="2026-01-24T00:29:54.852317451Z" level=info msg="StartContainer for \"4dfa9669bd21bdacc362a19365e3bc76ea6ff33fcbd85c1e21201568d1a83d8a\"" Jan 24 00:29:54.857758 containerd[1464]: time="2026-01-24T00:29:54.857693792Z" level=info msg="CreateContainer within sandbox \"287e9f2dffbe8afc4b150b0cce471ddf5139238bd9ade1c0d54a6308dd3de940\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ebe2c920907d13ae6bbb37f6bdb97e937daf02443d824a8a0e8079bcc0536f3\"" Jan 24 00:29:54.858334 containerd[1464]: time="2026-01-24T00:29:54.858313518Z" level=info msg="StartContainer for \"5ebe2c920907d13ae6bbb37f6bdb97e937daf02443d824a8a0e8079bcc0536f3\"" Jan 24 00:29:54.858760 containerd[1464]: time="2026-01-24T00:29:54.858367293Z" level=info msg="CreateContainer within sandbox \"c28a0fd6bf3edf73321d2a8759da78bcb0c36ae0c4c9d28cc4120fc6d67718bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce565859246008755fe8d844c32ee6e3bc93654cd56a3b2e3af7303e4a9845f7\"" Jan 24 00:29:54.859042 containerd[1464]: time="2026-01-24T00:29:54.859024326Z" level=info msg="StartContainer for \"ce565859246008755fe8d844c32ee6e3bc93654cd56a3b2e3af7303e4a9845f7\"" Jan 24 00:29:54.903771 systemd[1]: Started cri-containerd-4dfa9669bd21bdacc362a19365e3bc76ea6ff33fcbd85c1e21201568d1a83d8a.scope - libcontainer container 4dfa9669bd21bdacc362a19365e3bc76ea6ff33fcbd85c1e21201568d1a83d8a. Jan 24 00:29:54.906451 systemd[1]: Started cri-containerd-ce565859246008755fe8d844c32ee6e3bc93654cd56a3b2e3af7303e4a9845f7.scope - libcontainer container ce565859246008755fe8d844c32ee6e3bc93654cd56a3b2e3af7303e4a9845f7. Jan 24 00:29:54.925789 systemd[1]: Started cri-containerd-5ebe2c920907d13ae6bbb37f6bdb97e937daf02443d824a8a0e8079bcc0536f3.scope - libcontainer container 5ebe2c920907d13ae6bbb37f6bdb97e937daf02443d824a8a0e8079bcc0536f3. 
Jan 24 00:29:54.962805 containerd[1464]: time="2026-01-24T00:29:54.962769636Z" level=info msg="StartContainer for \"4dfa9669bd21bdacc362a19365e3bc76ea6ff33fcbd85c1e21201568d1a83d8a\" returns successfully" Jan 24 00:29:54.965439 kubelet[2182]: E0124 00:29:54.965356 2182 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:29:54.977892 containerd[1464]: time="2026-01-24T00:29:54.977680129Z" level=info msg="StartContainer for \"ce565859246008755fe8d844c32ee6e3bc93654cd56a3b2e3af7303e4a9845f7\" returns successfully" Jan 24 00:29:54.991005 containerd[1464]: time="2026-01-24T00:29:54.990940043Z" level=info msg="StartContainer for \"5ebe2c920907d13ae6bbb37f6bdb97e937daf02443d824a8a0e8079bcc0536f3\" returns successfully" Jan 24 00:29:55.487000 kubelet[2182]: I0124 00:29:55.486858 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:29:55.763336 kubelet[2182]: E0124 00:29:55.762017 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:55.763336 kubelet[2182]: E0124 00:29:55.762918 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:55.767194 kubelet[2182]: E0124 00:29:55.766847 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:55.768046 kubelet[2182]: E0124 00:29:55.767984 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:55.771981 kubelet[2182]: E0124 00:29:55.771918 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:55.772182 kubelet[2182]: E0124 00:29:55.772129 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:56.865658 kubelet[2182]: E0124 00:29:56.837871 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:56.865658 kubelet[2182]: E0124 00:29:56.838352 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:56.865658 kubelet[2182]: E0124 00:29:56.840018 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:56.865658 kubelet[2182]: E0124 00:29:56.840143 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:57.909872 kubelet[2182]: E0124 00:29:57.908963 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:57.909872 kubelet[2182]: E0124 00:29:57.909539 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:58.920131 kubelet[2182]: E0124 00:29:58.920044 2182 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:29:58.921932 kubelet[2182]: E0124 00:29:58.920315 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:29:59.449289 kubelet[2182]: E0124 00:29:59.449095 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:29:59.665219 kubelet[2182]: I0124 00:29:59.664140 2182 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:29:59.665219 kubelet[2182]: E0124 00:29:59.664235 2182 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:29:59.700560 kubelet[2182]: I0124 00:29:59.699379 2182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:29:59.709679 kubelet[2182]: E0124 00:29:59.709532 2182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:29:59.709679 kubelet[2182]: I0124 00:29:59.709664 2182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:29:59.712704 kubelet[2182]: E0124 00:29:59.712681 2182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:29:59.713083 kubelet[2182]: I0124 00:29:59.712875 2182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:59.719908 kubelet[2182]: E0124 00:29:59.719781 2182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:29:59.825909 kubelet[2182]: I0124 00:29:59.825823 2182 apiserver.go:52] "Watching apiserver" Jan 24 00:29:59.897660 kubelet[2182]: I0124 00:29:59.897449 2182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:30:00.131035 kubelet[2182]: I0124 00:30:00.130974 2182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:00.134912 kubelet[2182]: E0124 00:30:00.134864 2182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:00.135256 kubelet[2182]: E0124 00:30:00.135191 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 24 00:30:00.524171 kubelet[2182]: I0124 00:30:00.523459 2182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:00.535276 kubelet[2182]: E0124 00:30:00.535131 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:00.919642 kubelet[2182]: E0124 00:30:00.919500 2182 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:01.874954 systemd[1]: Reloading requested from client PID 2475 ('systemctl') (unit session-7.scope)... Jan 24 00:30:01.874989 systemd[1]: Reloading... Jan 24 00:30:01.947776 zram_generator::config[2517]: No configuration found. Jan 24 00:30:02.082091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:30:02.173574 systemd[1]: Reloading finished in 298 ms. Jan 24 00:30:02.222114 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:02.237858 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:30:02.238286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:02.238379 systemd[1]: kubelet.service: Consumed 1.948s CPU time, 131.2M memory peak, 0B memory swap peak. Jan 24 00:30:02.252210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:02.423785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:02.431792 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:30:02.490777 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:30:02.490777 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:30:02.491147 kubelet[2559]: I0124 00:30:02.490799 2559 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:30:02.499891 kubelet[2559]: I0124 00:30:02.499785 2559 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:30:02.499891 kubelet[2559]: I0124 00:30:02.499875 2559 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:30:02.499991 kubelet[2559]: I0124 00:30:02.499906 2559 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:30:02.499991 kubelet[2559]: I0124 00:30:02.499914 2559 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:30:02.500715 kubelet[2559]: I0124 00:30:02.500322 2559 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:30:02.502236 kubelet[2559]: I0124 00:30:02.502125 2559 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:30:02.505559 kubelet[2559]: I0124 00:30:02.505495 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:30:02.511321 kubelet[2559]: E0124 00:30:02.511232 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:30:02.511405 kubelet[2559]: I0124 00:30:02.511344 2559 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:30:02.518611 kubelet[2559]: I0124 00:30:02.518564 2559 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 24 00:30:02.519320 kubelet[2559]: I0124 00:30:02.519124 2559 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:30:02.519320 kubelet[2559]: I0124 00:30:02.519186 2559 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:30:02.519512 kubelet[2559]: I0124 00:30:02.519333 2559 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:30:02.519512 kubelet[2559]: I0124 00:30:02.519341 2559 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:30:02.519512 kubelet[2559]: I0124 00:30:02.519362 2559 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:30:02.520259 kubelet[2559]: I0124 00:30:02.520175 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:02.520539 kubelet[2559]: I0124 00:30:02.520458 2559 kubelet.go:475] "Attempting to sync node 
with API server" Jan 24 00:30:02.520539 kubelet[2559]: I0124 00:30:02.520497 2559 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:30:02.520539 kubelet[2559]: I0124 00:30:02.520516 2559 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:30:02.520539 kubelet[2559]: I0124 00:30:02.520531 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:30:02.522305 kubelet[2559]: I0124 00:30:02.522271 2559 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:30:02.523306 kubelet[2559]: I0124 00:30:02.523235 2559 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:30:02.523354 kubelet[2559]: I0124 00:30:02.523308 2559 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:30:02.534386 kubelet[2559]: I0124 00:30:02.534295 2559 server.go:1262] "Started kubelet" Jan 24 00:30:02.535926 kubelet[2559]: I0124 00:30:02.535868 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:30:02.538095 kubelet[2559]: I0124 00:30:02.537389 2559 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:30:02.539360 kubelet[2559]: I0124 00:30:02.539262 2559 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:30:02.539423 kubelet[2559]: I0124 00:30:02.539372 2559 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:30:02.539556 kubelet[2559]: I0124 00:30:02.539535 2559 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:30:02.545057 kubelet[2559]: I0124 00:30:02.539720 2559 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:30:02.545378 kubelet[2559]: I0124 00:30:02.545313 2559 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:30:02.551732 kubelet[2559]: I0124 00:30:02.551488 2559 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:30:02.553159 kubelet[2559]: I0124 00:30:02.553073 2559 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:30:02.553237 kubelet[2559]: I0124 00:30:02.553196 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:30:02.555989 kubelet[2559]: I0124 00:30:02.555956 2559 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:30:02.556354 kubelet[2559]: I0124 00:30:02.556281 2559 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:30:02.556533 kubelet[2559]: I0124 00:30:02.556466 2559 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:30:02.558462 kubelet[2559]: E0124 00:30:02.558404 2559 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:30:02.579215 kubelet[2559]: I0124 00:30:02.578916 2559 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 24 00:30:02.580968 kubelet[2559]: I0124 00:30:02.580950 2559 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:30:02.581801 kubelet[2559]: I0124 00:30:02.581747 2559 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:30:02.581801 kubelet[2559]: I0124 00:30:02.581798 2559 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:30:02.581927 kubelet[2559]: E0124 00:30:02.581888 2559 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:30:02.711203 kubelet[2559]: E0124 00:30:02.682367 2559 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:30:02.728073 kubelet[2559]: I0124 00:30:02.727965 2559 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:30:02.728073 kubelet[2559]: I0124 00:30:02.728012 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:30:02.728073 kubelet[2559]: I0124 00:30:02.728038 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:02.728248 kubelet[2559]: I0124 00:30:02.728196 2559 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:30:02.728248 kubelet[2559]: I0124 00:30:02.728207 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:30:02.728248 kubelet[2559]: I0124 00:30:02.728232 2559 policy_none.go:49] "None policy: Start" Jan 24 00:30:02.728248 kubelet[2559]: I0124 00:30:02.728246 2559 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:30:02.728328 kubelet[2559]: I0124 00:30:02.728261 2559 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:30:02.728427 kubelet[2559]: I0124 00:30:02.728382 2559 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:30:02.728459 kubelet[2559]: I0124 00:30:02.728430 2559 policy_none.go:47] "Start" Jan 24 00:30:02.736173 kubelet[2559]: E0124 00:30:02.736093 2559 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:30:02.736418 kubelet[2559]: I0124 00:30:02.736351 2559 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:30:02.736476 kubelet[2559]: I0124 00:30:02.736396 2559 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:30:02.737221 kubelet[2559]: I0124 00:30:02.737195 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:30:02.737953 kubelet[2559]: E0124 00:30:02.737897 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:30:02.858555 kubelet[2559]: I0124 00:30:02.858482 2559 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:30:02.869871 kubelet[2559]: I0124 00:30:02.869704 2559 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:30:02.869871 kubelet[2559]: I0124 00:30:02.869852 2559 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:30:02.884334 kubelet[2559]: I0124 00:30:02.884250 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:02.884334 kubelet[2559]: I0124 00:30:02.884291 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:02.884776 kubelet[2559]: I0124 00:30:02.884740 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:30:02.894720 kubelet[2559]: E0124 00:30:02.894655 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.064690 kubelet[2559]: I0124 00:30:03.064246 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.065655 kubelet[2559]: I0124 00:30:03.064877 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.065655 kubelet[2559]: I0124 00:30:03.064984 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.065655 kubelet[2559]: I0124 00:30:03.065216 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:03.065655 kubelet[2559]: I0124 00:30:03.065245 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:03.065655 kubelet[2559]: I0124 00:30:03.065280 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.065784 kubelet[2559]: I0124 00:30:03.065300 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.065784 kubelet[2559]: I0124 00:30:03.065323 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:30:03.065784 kubelet[2559]: I0124 00:30:03.065344 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a13708d5fb5b16df0d003be70ee7155-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a13708d5fb5b16df0d003be70ee7155\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:03.215897 kubelet[2559]: E0124 00:30:03.215658 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.215897 kubelet[2559]: E0124 00:30:03.215927 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.217297 kubelet[2559]: E0124 00:30:03.216047 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.522449 kubelet[2559]: I0124 00:30:03.522000 2559 apiserver.go:52] "Watching apiserver" Jan 24 00:30:03.560543 kubelet[2559]: I0124 00:30:03.558205 2559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:30:03.635315 kubelet[2559]: I0124 00:30:03.632258 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:03.635315 kubelet[2559]: I0124 00:30:03.632326 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.636679 kubelet[2559]: E0124 00:30:03.636574 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.676982 kubelet[2559]: E0124 00:30:03.676919 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:30:03.682148 kubelet[2559]: E0124 00:30:03.676924 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:30:03.683667 kubelet[2559]: E0124 00:30:03.683571 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.684123 kubelet[2559]: E0124 00:30:03.684057 2559 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:03.835398 kubelet[2559]: I0124 00:30:03.835269 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.835251178 podStartE2EDuration="1.835251178s" podCreationTimestamp="2026-01-24 00:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:03.788494653 +0000 UTC m=+1.351061949" watchObservedRunningTime="2026-01-24 00:30:03.835251178 +0000 UTC m=+1.397818445" Jan 24 00:30:03.835398 kubelet[2559]: I0124 00:30:03.835451 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.835446226 podStartE2EDuration="3.835446226s" podCreationTimestamp="2026-01-24 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:03.823297302 +0000 UTC m=+1.385864568" watchObservedRunningTime="2026-01-24 00:30:03.835446226 +0000 UTC m=+1.398013493" Jan 24 00:30:04.634520 kubelet[2559]: E0124 00:30:04.634446 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:04.635096 kubelet[2559]: E0124 00:30:04.634998 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:04.635714 kubelet[2559]: E0124 00:30:04.635641 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:05.285781 kubelet[2559]: I0124 00:30:05.285653 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.285637835 podStartE2EDuration="3.285637835s" podCreationTimestamp="2026-01-24 00:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:03.843663331 +0000 UTC m=+1.406230608" watchObservedRunningTime="2026-01-24 00:30:05.285637835 +0000 UTC m=+2.848205112" Jan 24 00:30:05.637061 kubelet[2559]: E0124 00:30:05.637005 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:05.637832 kubelet[2559]: E0124 00:30:05.637205 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:05.688189 kubelet[2559]: E0124 00:30:05.688126 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:06.651418 kubelet[2559]: E0124 00:30:06.651282 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:06.651418 kubelet[2559]: E0124 
00:30:06.651377 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:07.651810 kubelet[2559]: E0124 00:30:07.651756 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:08.451840 kubelet[2559]: I0124 00:30:08.451783 2559 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:30:08.452859 containerd[1464]: time="2026-01-24T00:30:08.452699116Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:30:08.453377 kubelet[2559]: I0124 00:30:08.453168 2559 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:30:09.380791 systemd[1]: Created slice kubepods-besteffort-podd7c5277e_b8f6_4663_a392_aca9f3e6dfda.slice - libcontainer container kubepods-besteffort-podd7c5277e_b8f6_4663_a392_aca9f3e6dfda.slice. Jan 24 00:30:09.424567 kubelet[2559]: I0124 00:30:09.424517 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7c5277e-b8f6-4663-a392-aca9f3e6dfda-kube-proxy\") pod \"kube-proxy-7j9fc\" (UID: \"d7c5277e-b8f6-4663-a392-aca9f3e6dfda\") " pod="kube-system/kube-proxy-7j9fc" Jan 24 00:30:09.424567 kubelet[2559]: I0124 00:30:09.424555 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7c5277e-b8f6-4663-a392-aca9f3e6dfda-xtables-lock\") pod \"kube-proxy-7j9fc\" (UID: \"d7c5277e-b8f6-4663-a392-aca9f3e6dfda\") " pod="kube-system/kube-proxy-7j9fc" Jan 24 00:30:09.425503 kubelet[2559]: I0124 00:30:09.424571 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49zc\" (UniqueName: \"kubernetes.io/projected/d7c5277e-b8f6-4663-a392-aca9f3e6dfda-kube-api-access-z49zc\") pod \"kube-proxy-7j9fc\" (UID: \"d7c5277e-b8f6-4663-a392-aca9f3e6dfda\") " pod="kube-system/kube-proxy-7j9fc" Jan 24 00:30:09.425503 kubelet[2559]: I0124 00:30:09.424666 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7c5277e-b8f6-4663-a392-aca9f3e6dfda-lib-modules\") pod \"kube-proxy-7j9fc\" (UID: \"d7c5277e-b8f6-4663-a392-aca9f3e6dfda\") " pod="kube-system/kube-proxy-7j9fc" Jan 24 00:30:09.593849 systemd[1]: Created slice kubepods-besteffort-podeacc9065_647f_4812_b5cb_d6885adaf594.slice - libcontainer container kubepods-besteffort-podeacc9065_647f_4812_b5cb_d6885adaf594.slice. 
Jan 24 00:30:09.626950 kubelet[2559]: I0124 00:30:09.626798 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xnz\" (UniqueName: \"kubernetes.io/projected/eacc9065-647f-4812-b5cb-d6885adaf594-kube-api-access-z8xnz\") pod \"tigera-operator-65cdcdfd6d-zrf4v\" (UID: \"eacc9065-647f-4812-b5cb-d6885adaf594\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zrf4v" Jan 24 00:30:09.626950 kubelet[2559]: I0124 00:30:09.626892 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eacc9065-647f-4812-b5cb-d6885adaf594-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-zrf4v\" (UID: \"eacc9065-647f-4812-b5cb-d6885adaf594\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zrf4v" Jan 24 00:30:09.693377 kubelet[2559]: E0124 00:30:09.693058 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:09.694276 containerd[1464]: time="2026-01-24T00:30:09.694188723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7j9fc,Uid:d7c5277e-b8f6-4663-a392-aca9f3e6dfda,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:09.741793 containerd[1464]: time="2026-01-24T00:30:09.741338990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:09.741793 containerd[1464]: time="2026-01-24T00:30:09.741416425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:09.741793 containerd[1464]: time="2026-01-24T00:30:09.741431272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.741793 containerd[1464]: time="2026-01-24T00:30:09.741522525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.788449 systemd[1]: Started cri-containerd-90e22a5ce297976325aa91162e86448d150ba9842a3e7cbc860a8259425b1905.scope - libcontainer container 90e22a5ce297976325aa91162e86448d150ba9842a3e7cbc860a8259425b1905. 
Jan 24 00:30:09.826679 containerd[1464]: time="2026-01-24T00:30:09.826548777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7j9fc,Uid:d7c5277e-b8f6-4663-a392-aca9f3e6dfda,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e22a5ce297976325aa91162e86448d150ba9842a3e7cbc860a8259425b1905\"" Jan 24 00:30:09.827684 kubelet[2559]: E0124 00:30:09.827494 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:09.842235 containerd[1464]: time="2026-01-24T00:30:09.842134652Z" level=info msg="CreateContainer within sandbox \"90e22a5ce297976325aa91162e86448d150ba9842a3e7cbc860a8259425b1905\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:30:09.885200 containerd[1464]: time="2026-01-24T00:30:09.884984973Z" level=info msg="CreateContainer within sandbox \"90e22a5ce297976325aa91162e86448d150ba9842a3e7cbc860a8259425b1905\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"696d6ab9fab31c5795194599b66a4e882b757b11063b5f4e6f8da779e249c3c3\"" Jan 24 00:30:09.887140 containerd[1464]: time="2026-01-24T00:30:09.887090668Z" level=info msg="StartContainer for \"696d6ab9fab31c5795194599b66a4e882b757b11063b5f4e6f8da779e249c3c3\"" Jan 24 00:30:09.901849 containerd[1464]: time="2026-01-24T00:30:09.901723442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zrf4v,Uid:eacc9065-647f-4812-b5cb-d6885adaf594,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:30:09.954150 systemd[1]: Started cri-containerd-696d6ab9fab31c5795194599b66a4e882b757b11063b5f4e6f8da779e249c3c3.scope - libcontainer container 696d6ab9fab31c5795194599b66a4e882b757b11063b5f4e6f8da779e249c3c3. Jan 24 00:30:09.959075 containerd[1464]: time="2026-01-24T00:30:09.954769193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:09.959075 containerd[1464]: time="2026-01-24T00:30:09.958138158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:09.959075 containerd[1464]: time="2026-01-24T00:30:09.958163506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.959075 containerd[1464]: time="2026-01-24T00:30:09.958307126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:10.004990 systemd[1]: Started cri-containerd-4e312976a531dfca05f3c97cbd6da6c2a38c9ba9c5d1e612da90764841d59f0d.scope - libcontainer container 4e312976a531dfca05f3c97cbd6da6c2a38c9ba9c5d1e612da90764841d59f0d. 
Jan 24 00:30:10.092470 containerd[1464]: time="2026-01-24T00:30:10.091982029Z" level=info msg="StartContainer for \"696d6ab9fab31c5795194599b66a4e882b757b11063b5f4e6f8da779e249c3c3\" returns successfully" Jan 24 00:30:10.200535 containerd[1464]: time="2026-01-24T00:30:10.200391627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zrf4v,Uid:eacc9065-647f-4812-b5cb-d6885adaf594,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4e312976a531dfca05f3c97cbd6da6c2a38c9ba9c5d1e612da90764841d59f0d\"" Jan 24 00:30:10.204514 containerd[1464]: time="2026-01-24T00:30:10.204462253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:30:10.691674 kubelet[2559]: E0124 00:30:10.691564 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:10.749536 kubelet[2559]: I0124 00:30:10.749214 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7j9fc" podStartSLOduration=1.749198832 podStartE2EDuration="1.749198832s" podCreationTimestamp="2026-01-24 00:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:10.748930076 +0000 UTC m=+8.311497343" watchObservedRunningTime="2026-01-24 00:30:10.749198832 +0000 UTC m=+8.311766109" Jan 24 00:30:11.125811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080589934.mount: Deactivated successfully. Jan 24 00:30:11.956562 containerd[1464]: time="2026-01-24T00:30:11.956395457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:11.958304 containerd[1464]: time="2026-01-24T00:30:11.958120949Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:30:11.959987 containerd[1464]: time="2026-01-24T00:30:11.959919973Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:11.964228 containerd[1464]: time="2026-01-24T00:30:11.964170653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:11.965013 containerd[1464]: time="2026-01-24T00:30:11.964893098Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.760368218s" Jan 24 00:30:11.965013 containerd[1464]: time="2026-01-24T00:30:11.964948262Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:30:11.980378 containerd[1464]: time="2026-01-24T00:30:11.980249551Z" level=info msg="CreateContainer within sandbox \"4e312976a531dfca05f3c97cbd6da6c2a38c9ba9c5d1e612da90764841d59f0d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:30:12.000834 containerd[1464]: time="2026-01-24T00:30:12.000715430Z" 
level=info msg="CreateContainer within sandbox \"4e312976a531dfca05f3c97cbd6da6c2a38c9ba9c5d1e612da90764841d59f0d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5491b05e4464fff54b3ed057c1729fc05885ff1dc3c5ef360819ec5f6f28af7f\"" Jan 24 00:30:12.003539 containerd[1464]: time="2026-01-24T00:30:12.001915888Z" level=info msg="StartContainer for \"5491b05e4464fff54b3ed057c1729fc05885ff1dc3c5ef360819ec5f6f28af7f\"" Jan 24 00:30:12.061840 systemd[1]: Started cri-containerd-5491b05e4464fff54b3ed057c1729fc05885ff1dc3c5ef360819ec5f6f28af7f.scope - libcontainer container 5491b05e4464fff54b3ed057c1729fc05885ff1dc3c5ef360819ec5f6f28af7f. Jan 24 00:30:12.110126 containerd[1464]: time="2026-01-24T00:30:12.110007290Z" level=info msg="StartContainer for \"5491b05e4464fff54b3ed057c1729fc05885ff1dc3c5ef360819ec5f6f28af7f\" returns successfully" Jan 24 00:30:12.726741 kubelet[2559]: I0124 00:30:12.725302 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-zrf4v" podStartSLOduration=1.9612785160000001 podStartE2EDuration="3.725283706s" podCreationTimestamp="2026-01-24 00:30:09 +0000 UTC" firstStartedPulling="2026-01-24 00:30:10.203978151 +0000 UTC m=+7.766545418" lastFinishedPulling="2026-01-24 00:30:11.967983311 +0000 UTC m=+9.530550608" observedRunningTime="2026-01-24 00:30:12.724044973 +0000 UTC m=+10.286612240" watchObservedRunningTime="2026-01-24 00:30:12.725283706 +0000 UTC m=+10.287850972" Jan 24 00:30:13.413662 kubelet[2559]: E0124 00:30:13.411682 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:13.705981 kubelet[2559]: E0124 00:30:13.705838 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:17.806968 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 24 00:30:17.811024 sshd[1643]: pam_unix(sshd:session): session closed for user core Jan 24 00:30:17.816568 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:46806.service: Deactivated successfully. Jan 24 00:30:17.818111 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:30:17.822442 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:30:17.823318 systemd[1]: session-7.scope: Consumed 12.428s CPU time, 163.4M memory peak, 0B memory swap peak. Jan 24 00:30:17.826118 systemd-logind[1444]: Removed session 7. Jan 24 00:30:22.261165 systemd[1]: Created slice kubepods-besteffort-pod1083f958_5c1f_4559_b6e9_db5416bdf376.slice - libcontainer container kubepods-besteffort-pod1083f958_5c1f_4559_b6e9_db5416bdf376.slice. 
Jan 24 00:30:22.327856 kubelet[2559]: I0124 00:30:22.327761 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1083f958-5c1f-4559-b6e9-db5416bdf376-tigera-ca-bundle\") pod \"calico-typha-77cf79547d-snpk2\" (UID: \"1083f958-5c1f-4559-b6e9-db5416bdf376\") " pod="calico-system/calico-typha-77cf79547d-snpk2" Jan 24 00:30:22.327856 kubelet[2559]: I0124 00:30:22.327841 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1083f958-5c1f-4559-b6e9-db5416bdf376-typha-certs\") pod \"calico-typha-77cf79547d-snpk2\" (UID: \"1083f958-5c1f-4559-b6e9-db5416bdf376\") " pod="calico-system/calico-typha-77cf79547d-snpk2" Jan 24 00:30:22.328428 kubelet[2559]: I0124 00:30:22.327909 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56cx\" (UniqueName: \"kubernetes.io/projected/1083f958-5c1f-4559-b6e9-db5416bdf376-kube-api-access-w56cx\") pod \"calico-typha-77cf79547d-snpk2\" (UID: \"1083f958-5c1f-4559-b6e9-db5416bdf376\") " pod="calico-system/calico-typha-77cf79547d-snpk2" Jan 24 00:30:22.455084 systemd[1]: Created slice kubepods-besteffort-podf24745a8_13cf_4f53_bee2_fa3ad659d190.slice - libcontainer container kubepods-besteffort-podf24745a8_13cf_4f53_bee2_fa3ad659d190.slice. Jan 24 00:30:22.529784 kubelet[2559]: I0124 00:30:22.529256 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-cni-net-dir\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529784 kubelet[2559]: I0124 00:30:22.529333 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f24745a8-13cf-4f53-bee2-fa3ad659d190-tigera-ca-bundle\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529784 kubelet[2559]: I0124 00:30:22.529351 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-xtables-lock\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529784 kubelet[2559]: I0124 00:30:22.529365 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f24745a8-13cf-4f53-bee2-fa3ad659d190-node-certs\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529784 kubelet[2559]: I0124 00:30:22.529378 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-cni-bin-dir\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529974 kubelet[2559]: I0124 00:30:22.529391 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-cni-log-dir\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529974 kubelet[2559]: I0124 00:30:22.529402 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-flexvol-driver-host\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529974 kubelet[2559]: I0124 00:30:22.529416 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5q5\" (UniqueName: \"kubernetes.io/projected/f24745a8-13cf-4f53-bee2-fa3ad659d190-kube-api-access-db5q5\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529974 kubelet[2559]: I0124 00:30:22.529428 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-lib-modules\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.529974 kubelet[2559]: I0124 00:30:22.529439 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-var-run-calico\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.530081 kubelet[2559]: I0124 00:30:22.529452 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-policysync\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.530081 kubelet[2559]: I0124 00:30:22.529464 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f24745a8-13cf-4f53-bee2-fa3ad659d190-var-lib-calico\") pod \"calico-node-m8w55\" (UID: \"f24745a8-13cf-4f53-bee2-fa3ad659d190\") " pod="calico-system/calico-node-m8w55" Jan 24 00:30:22.569236 kubelet[2559]: E0124 00:30:22.569160 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:22.570293 containerd[1464]: time="2026-01-24T00:30:22.570171390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77cf79547d-snpk2,Uid:1083f958-5c1f-4559-b6e9-db5416bdf376,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:22.601773 containerd[1464]: time="2026-01-24T00:30:22.601705281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:22.601773 containerd[1464]: time="2026-01-24T00:30:22.601743123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:22.603010 containerd[1464]: time="2026-01-24T00:30:22.602414585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:22.603824 containerd[1464]: time="2026-01-24T00:30:22.603693718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:22.638379 kubelet[2559]: E0124 00:30:22.636659 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:22.641010 kubelet[2559]: E0124 00:30:22.640880 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.641010 kubelet[2559]: W0124 00:30:22.641009 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.641215 kubelet[2559]: E0124 00:30:22.641032 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.643134 kubelet[2559]: E0124 00:30:22.642474 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.643134 kubelet[2559]: W0124 00:30:22.642631 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.643134 kubelet[2559]: E0124 00:30:22.642647 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.645822 systemd[1]: Started cri-containerd-1e72f015d182ea7791f5279d13fc6d3ea857498aac792b3ad225af05047e6416.scope - libcontainer container 1e72f015d182ea7791f5279d13fc6d3ea857498aac792b3ad225af05047e6416. Jan 24 00:30:22.662131 kubelet[2559]: E0124 00:30:22.662104 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.662380 kubelet[2559]: W0124 00:30:22.662262 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.662380 kubelet[2559]: E0124 00:30:22.662291 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.718781 containerd[1464]: time="2026-01-24T00:30:22.718658984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77cf79547d-snpk2,Uid:1083f958-5c1f-4559-b6e9-db5416bdf376,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e72f015d182ea7791f5279d13fc6d3ea857498aac792b3ad225af05047e6416\"" Jan 24 00:30:22.723647 kubelet[2559]: E0124 00:30:22.723554 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:22.726288 containerd[1464]: time="2026-01-24T00:30:22.726180247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:30:22.730244 kubelet[2559]: E0124 00:30:22.730214 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.731258 kubelet[2559]: W0124 00:30:22.730245 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.731258 kubelet[2559]: E0124 00:30:22.730264 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.731258 kubelet[2559]: E0124 00:30:22.730680 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.731258 kubelet[2559]: W0124 00:30:22.730690 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.731258 kubelet[2559]: E0124 00:30:22.730702 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.731854 kubelet[2559]: E0124 00:30:22.731742 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.731854 kubelet[2559]: W0124 00:30:22.731816 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.731854 kubelet[2559]: E0124 00:30:22.731828 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.732365 kubelet[2559]: E0124 00:30:22.732271 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.732412 kubelet[2559]: W0124 00:30:22.732387 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.732412 kubelet[2559]: E0124 00:30:22.732401 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.732945 kubelet[2559]: E0124 00:30:22.732886 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.732945 kubelet[2559]: W0124 00:30:22.732897 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.732945 kubelet[2559]: E0124 00:30:22.732907 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.733668 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.734420 kubelet[2559]: W0124 00:30:22.733701 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.733711 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.733925 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.734420 kubelet[2559]: W0124 00:30:22.733932 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.733940 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.734114 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.734420 kubelet[2559]: W0124 00:30:22.734121 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.734129 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.734420 kubelet[2559]: E0124 00:30:22.734391 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.734749 kubelet[2559]: W0124 00:30:22.734400 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.734749 kubelet[2559]: E0124 00:30:22.734409 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.734790 kubelet[2559]: E0124 00:30:22.734781 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.734812 kubelet[2559]: W0124 00:30:22.734795 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.734830 kubelet[2559]: E0124 00:30:22.734808 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.735237 kubelet[2559]: E0124 00:30:22.735207 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.735350 kubelet[2559]: W0124 00:30:22.735240 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.735350 kubelet[2559]: E0124 00:30:22.735253 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.735846 kubelet[2559]: E0124 00:30:22.735743 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.735846 kubelet[2559]: W0124 00:30:22.735778 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.735846 kubelet[2559]: E0124 00:30:22.735791 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.736210 kubelet[2559]: E0124 00:30:22.736124 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.736210 kubelet[2559]: W0124 00:30:22.736163 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.736210 kubelet[2559]: E0124 00:30:22.736176 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.736523 kubelet[2559]: E0124 00:30:22.736506 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.737053 kubelet[2559]: W0124 00:30:22.736696 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.737053 kubelet[2559]: E0124 00:30:22.736714 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.737528 kubelet[2559]: E0124 00:30:22.737511 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.737734 kubelet[2559]: W0124 00:30:22.737717 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.737806 kubelet[2559]: E0124 00:30:22.737792 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.738405 kubelet[2559]: E0124 00:30:22.738189 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.738405 kubelet[2559]: W0124 00:30:22.738203 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.738405 kubelet[2559]: E0124 00:30:22.738215 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.738714 kubelet[2559]: E0124 00:30:22.738699 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.738793 kubelet[2559]: W0124 00:30:22.738780 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.738854 kubelet[2559]: E0124 00:30:22.738842 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.739296 kubelet[2559]: E0124 00:30:22.739281 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.739502 kubelet[2559]: W0124 00:30:22.739425 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.739502 kubelet[2559]: E0124 00:30:22.739442 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.740029 kubelet[2559]: E0124 00:30:22.740014 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.740245 kubelet[2559]: W0124 00:30:22.740096 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.740245 kubelet[2559]: E0124 00:30:22.740113 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.740719 kubelet[2559]: E0124 00:30:22.740670 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.740719 kubelet[2559]: W0124 00:30:22.740686 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.740719 kubelet[2559]: E0124 00:30:22.740697 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.741695 kubelet[2559]: E0124 00:30:22.741500 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.741695 kubelet[2559]: W0124 00:30:22.741519 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.741695 kubelet[2559]: E0124 00:30:22.741533 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.741695 kubelet[2559]: I0124 00:30:22.741561 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40661dc3-d91f-42a2-a397-77dbe1e37cee-varrun\") pod \"csi-node-driver-frc8p\" (UID: \"40661dc3-d91f-42a2-a397-77dbe1e37cee\") " pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:22.744294 kubelet[2559]: E0124 00:30:22.743983 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.744294 kubelet[2559]: W0124 00:30:22.743999 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.744294 kubelet[2559]: E0124 00:30:22.744011 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.745573 kubelet[2559]: E0124 00:30:22.745415 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.745573 kubelet[2559]: W0124 00:30:22.745430 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.745573 kubelet[2559]: E0124 00:30:22.745442 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.746090 kubelet[2559]: I0124 00:30:22.745883 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40661dc3-d91f-42a2-a397-77dbe1e37cee-registration-dir\") pod \"csi-node-driver-frc8p\" (UID: \"40661dc3-d91f-42a2-a397-77dbe1e37cee\") " pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:22.746090 kubelet[2559]: E0124 00:30:22.746049 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.746090 kubelet[2559]: W0124 00:30:22.746059 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.746090 kubelet[2559]: E0124 00:30:22.746071 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.754222 kubelet[2559]: E0124 00:30:22.754065 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.754222 kubelet[2559]: W0124 00:30:22.754082 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.754222 kubelet[2559]: E0124 00:30:22.754094 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.754720 kubelet[2559]: E0124 00:30:22.754664 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.754720 kubelet[2559]: W0124 00:30:22.754680 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.754720 kubelet[2559]: E0124 00:30:22.754692 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.755407 kubelet[2559]: E0124 00:30:22.755277 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.755407 kubelet[2559]: W0124 00:30:22.755380 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.755407 kubelet[2559]: E0124 00:30:22.755392 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.755407 kubelet[2559]: I0124 00:30:22.755408 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40661dc3-d91f-42a2-a397-77dbe1e37cee-socket-dir\") pod \"csi-node-driver-frc8p\" (UID: \"40661dc3-d91f-42a2-a397-77dbe1e37cee\") " pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:22.755936 kubelet[2559]: E0124 00:30:22.755910 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.755936 kubelet[2559]: W0124 00:30:22.755923 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.755936 kubelet[2559]: E0124 00:30:22.755934 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.756251 kubelet[2559]: I0124 00:30:22.756023 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40661dc3-d91f-42a2-a397-77dbe1e37cee-kubelet-dir\") pod \"csi-node-driver-frc8p\" (UID: \"40661dc3-d91f-42a2-a397-77dbe1e37cee\") " pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:22.756745 kubelet[2559]: E0124 00:30:22.756692 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.756745 kubelet[2559]: W0124 00:30:22.756706 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.756745 kubelet[2559]: E0124 00:30:22.756715 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.756745 kubelet[2559]: I0124 00:30:22.756733 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7kg\" (UniqueName: \"kubernetes.io/projected/40661dc3-d91f-42a2-a397-77dbe1e37cee-kube-api-access-ck7kg\") pod \"csi-node-driver-frc8p\" (UID: \"40661dc3-d91f-42a2-a397-77dbe1e37cee\") " pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:22.757145 kubelet[2559]: E0124 00:30:22.757081 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.757351 kubelet[2559]: W0124 00:30:22.757199 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.757351 kubelet[2559]: E0124 00:30:22.757213 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:22.758055 kubelet[2559]: E0124 00:30:22.757981 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.758055 kubelet[2559]: W0124 00:30:22.757992 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.758055 kubelet[2559]: E0124 00:30:22.758001 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.759781 kubelet[2559]: E0124 00:30:22.759747 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.759781 kubelet[2559]: W0124 00:30:22.759781 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.760305 kubelet[2559]: E0124 00:30:22.759791 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.761079 kubelet[2559]: E0124 00:30:22.761064 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.761079 kubelet[2559]: W0124 00:30:22.761076 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.761476 kubelet[2559]: E0124 00:30:22.761087 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.761476 kubelet[2559]: E0124 00:30:22.761440 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.761476 kubelet[2559]: W0124 00:30:22.761449 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.761476 kubelet[2559]: E0124 00:30:22.761459 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:22.762163 kubelet[2559]: E0124 00:30:22.762045 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:22.762163 kubelet[2559]: W0124 00:30:22.762092 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:22.762163 kubelet[2559]: E0124 00:30:22.762121 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 24 00:30:22.763718 kubelet[2559]: E0124 00:30:22.763576 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:22.764412 containerd[1464]: time="2026-01-24T00:30:22.764296312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8w55,Uid:f24745a8-13cf-4f53-bee2-fa3ad659d190,Namespace:calico-system,Attempt:0,}"
Jan 24 00:30:22.812491 containerd[1464]: time="2026-01-24T00:30:22.811951445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:30:22.812491 containerd[1464]: time="2026-01-24T00:30:22.812053255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:30:22.812491 containerd[1464]: time="2026-01-24T00:30:22.812063725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:30:22.812491 containerd[1464]: time="2026-01-24T00:30:22.812247521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:30:22.834779 systemd[1]: Started cri-containerd-dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520.scope - libcontainer container dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520.
Jan 24 00:30:22.858209 kubelet[2559]: E0124 00:30:22.858178 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:30:22.858209 kubelet[2559]: W0124 00:30:22.858197 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:30:22.858384 kubelet[2559]: E0124 00:30:22.858275 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[editor's note: the same three-message FlexVolume probe failure repeats, identical except for timestamps, roughly twenty-five more times through 00:30:22.876; the duplicates are elided here]
Jan 24 00:30:22.874058 containerd[1464]: time="2026-01-24T00:30:22.874007298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8w55,Uid:f24745a8-13cf-4f53-bee2-fa3ad659d190,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\""
Jan 24 00:30:22.876205 kubelet[2559]: E0124 00:30:22.876119 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:23.724712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677877146.mount: Deactivated successfully.
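[editor's note: the repeated driver-call.go / plugins.go failures above come from kubelet's FlexVolume probe loop: for every directory under the plugin root it execs "<driver> init" and parses stdout as JSON. The nodeagent~uds/uds binary has not been installed yet, so the exec fails, stdout is empty, and decoding "" yields exactly "unexpected end of JSON input". A minimal Go sketch of that call shape follows; the struct and function names are illustrative assumptions, not kubelet's actual identifiers.]

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print
// for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver execs "<path> init" and decodes stdout. A missing binary
// gives an exec error and empty output, so json.Unmarshal fails with
// "unexpected end of JSON input" -- the error seen in the log.
func probeDriver(path string) (*driverStatus, error) {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```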
Jan 24 00:30:24.407705 containerd[1464]: time="2026-01-24T00:30:24.407523999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:24.409000 containerd[1464]: time="2026-01-24T00:30:24.408908220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:30:24.410825 containerd[1464]: time="2026-01-24T00:30:24.410783289Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:24.417310 containerd[1464]: time="2026-01-24T00:30:24.417219341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:24.418028 containerd[1464]: time="2026-01-24T00:30:24.417968348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.691756111s"
Jan 24 00:30:24.418028 containerd[1464]: time="2026-01-24T00:30:24.418010927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:30:24.419228 containerd[1464]: time="2026-01-24T00:30:24.419130123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:30:24.435963 containerd[1464]: time="2026-01-24T00:30:24.435865740Z" level=info msg="CreateContainer within sandbox \"1e72f015d182ea7791f5279d13fc6d3ea857498aac792b3ad225af05047e6416\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:30:24.457130 containerd[1464]: time="2026-01-24T00:30:24.457039964Z" level=info msg="CreateContainer within sandbox \"1e72f015d182ea7791f5279d13fc6d3ea857498aac792b3ad225af05047e6416\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1cc18aaad874328b098e116933873bfe65a6b203e6e620e9a81090a87e5018d0\""
Jan 24 00:30:24.458314 containerd[1464]: time="2026-01-24T00:30:24.458183204Z" level=info msg="StartContainer for \"1cc18aaad874328b098e116933873bfe65a6b203e6e620e9a81090a87e5018d0\""
Jan 24 00:30:24.514788 systemd[1]: Started cri-containerd-1cc18aaad874328b098e116933873bfe65a6b203e6e620e9a81090a87e5018d0.scope - libcontainer container 1cc18aaad874328b098e116933873bfe65a6b203e6e620e9a81090a87e5018d0.
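[editor's note: the containerd entries above trace the CRI sequence the kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, StartContainer launches it, and systemd tracks the result as a cri-containerd-<id>.scope unit. Below is a minimal client-side sketch against the CRI v1 gRPC API, assuming the default containerd socket; the configs are stubbed and this is illustrative, not kubelet's code.]

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to containerd's CRI socket (path is the common default).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* metadata, DNS, namespaces ... */ }

	// 1. RunPodSandbox -> "... returns sandbox id ..."
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer within sandbox -> "... returns container id ..."
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command, mounts ... */ },
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer -> "StartContainer ... returns successfully"
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}
```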
Jan 24 00:30:24.583253 kubelet[2559]: E0124 00:30:24.583208 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee"
Jan 24 00:30:24.594934 containerd[1464]: time="2026-01-24T00:30:24.594835308Z" level=info msg="StartContainer for \"1cc18aaad874328b098e116933873bfe65a6b203e6e620e9a81090a87e5018d0\" returns successfully"
Jan 24 00:30:24.756066 kubelet[2559]: E0124 00:30:24.755393 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:24.863656 kubelet[2559]: E0124 00:30:24.863518 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:30:24.863656 kubelet[2559]: W0124 00:30:24.863544 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:30:24.863656 kubelet[2559]: E0124 00:30:24.863564 2559 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[editor's note: the same three-message FlexVolume probe failure repeats, identical except for timestamps, roughly thirty more times between 00:30:24.865 and 00:30:24.911; the duplicates are elided here]
Jan 24 00:30:25.000012 containerd[1464]: time="2026-01-24T00:30:24.999906419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:25.001243 containerd[1464]: time="2026-01-24T00:30:25.001086137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:30:25.002792 containerd[1464]: time="2026-01-24T00:30:25.002750308Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:25.005652 containerd[1464]: time="2026-01-24T00:30:25.005345982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:25.006335 containerd[1464]: time="2026-01-24T00:30:25.006180640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 587.015953ms"
Jan 24 00:30:25.006335 containerd[1464]: time="2026-01-24T00:30:25.006254879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:30:25.013449 containerd[1464]: time="2026-01-24T00:30:25.013310683Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:30:25.033024 containerd[1464]: time="2026-01-24T00:30:25.032934392Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171\""
Jan 24 00:30:25.033822 containerd[1464]: time="2026-01-24T00:30:25.033728207Z" level=info msg="StartContainer for \"c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171\""
Jan 24 00:30:25.091827 systemd[1]: Started cri-containerd-c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171.scope - libcontainer container c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171.
Jan 24 00:30:25.144758 containerd[1464]: time="2026-01-24T00:30:25.144696026Z" level=info msg="StartContainer for \"c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171\" returns successfully"
Jan 24 00:30:25.170648 systemd[1]: cri-containerd-c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171.scope: Deactivated successfully.
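[editor's note: the flexvol-driver container created inside the calico-node sandbox above is Calico's pod2daemon-flexvol init container; its job is to install the uds driver binary into the kubelet FlexVolume plugin directory that was being probed, after which the driver-call errors stop. A plausible sketch of such an installer follows, written copy-then-rename so the kubelet never execs a half-written binary; the destination path comes from the log, the source path and code itself are an illustration of the technique, not Calico's source.]

```go
package main

import (
	"io"
	"os"
	"path/filepath"
)

// installDriver copies src into dstDir under a temporary name and then
// renames it into place. Rename is atomic within one filesystem, so a
// concurrent prober sees either no driver or a complete one.
func installDriver(src, dstDir, name string) error {
	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	tmp := filepath.Join(dstDir, "."+name+".tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err := out.Close(); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dstDir, name))
}

func main() {
	// Source path inside the image is an assumption; the destination is the
	// directory kubelet was probing in the log.
	if err := installDriver("/usr/local/bin/flexvol",
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds", "uds"); err != nil {
		os.Exit(1)
	}
}
```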
Jan 24 00:30:25.328122 containerd[1464]: time="2026-01-24T00:30:25.324647686Z" level=info msg="shim disconnected" id=c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171 namespace=k8s.io
Jan 24 00:30:25.328122 containerd[1464]: time="2026-01-24T00:30:25.328072315Z" level=warning msg="cleaning up after shim disconnected" id=c6d306c4c7e4b5a784e0436fe449df4958f9cdbce03dd32c07e3d50cfa47b171 namespace=k8s.io
Jan 24 00:30:25.328122 containerd[1464]: time="2026-01-24T00:30:25.328088314Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:30:25.758784 kubelet[2559]: I0124 00:30:25.757505 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:30:25.758784 kubelet[2559]: E0124 00:30:25.757851 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:25.758784 kubelet[2559]: E0124 00:30:25.758076 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:25.760055 containerd[1464]: time="2026-01-24T00:30:25.759986844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:30:25.790650 kubelet[2559]: I0124 00:30:25.789801 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77cf79547d-snpk2" podStartSLOduration=2.096175459 podStartE2EDuration="3.789784381s" podCreationTimestamp="2026-01-24 00:30:22 +0000 UTC" firstStartedPulling="2026-01-24 00:30:22.725280887 +0000 UTC m=+20.287848154" lastFinishedPulling="2026-01-24 00:30:24.418889809 +0000 UTC m=+21.981457076" observedRunningTime="2026-01-24 00:30:24.774262442 +0000 UTC m=+22.336829709" watchObservedRunningTime="2026-01-24 00:30:25.789784381 +0000 UTC m=+23.352351648"
Jan 24 00:30:26.588108 kubelet[2559]: E0124 00:30:26.587999 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee"
Jan 24 00:30:28.139210 containerd[1464]: time="2026-01-24T00:30:28.139056503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:28.140682 containerd[1464]: time="2026-01-24T00:30:28.140533233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 00:30:28.142205 containerd[1464]: time="2026-01-24T00:30:28.142139920Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:28.145546 containerd[1464]: time="2026-01-24T00:30:28.145410481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:28.146295 containerd[1464]: time="2026-01-24T00:30:28.146212422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.386143222s"
Jan 24 00:30:28.146295 containerd[1464]: time="2026-01-24T00:30:28.146281782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 00:30:28.153734 containerd[1464]: time="2026-01-24T00:30:28.153678310Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 24 00:30:28.174021 containerd[1464]: time="2026-01-24T00:30:28.173922320Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019\""
Jan 24 00:30:28.175028 containerd[1464]: time="2026-01-24T00:30:28.174858307Z" level=info msg="StartContainer for \"f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019\""
Jan 24 00:30:28.271901 systemd[1]: Started cri-containerd-f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019.scope - libcontainer container f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019.
Jan 24 00:30:28.322349 containerd[1464]: time="2026-01-24T00:30:28.322270178Z" level=info msg="StartContainer for \"f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019\" returns successfully"
Jan 24 00:30:28.583058 kubelet[2559]: E0124 00:30:28.582767 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee"
Jan 24 00:30:28.774309 kubelet[2559]: E0124 00:30:28.773903 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:29.471963 systemd[1]: cri-containerd-f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019.scope: Deactivated successfully.
Jan 24 00:30:29.472283 systemd[1]: cri-containerd-f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019.scope: Consumed 1.312s CPU time.
Jan 24 00:30:29.532871 kubelet[2559]: I0124 00:30:29.531217 2559 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 24 00:30:29.533559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019-rootfs.mount: Deactivated successfully.
Jan 24 00:30:29.555190 containerd[1464]: time="2026-01-24T00:30:29.555054652Z" level=info msg="shim disconnected" id=f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019 namespace=k8s.io
Jan 24 00:30:29.555190 containerd[1464]: time="2026-01-24T00:30:29.555160711Z" level=warning msg="cleaning up after shim disconnected" id=f7c0374604384b7dab87195d69f4732700a4b1b3e8bcf63bb3aeb713a6be2019 namespace=k8s.io
Jan 24 00:30:29.555190 containerd[1464]: time="2026-01-24T00:30:29.555174126Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:30:29.634549 systemd[1]: Created slice kubepods-besteffort-pod17985068_d78a_4b83_a3e4_03169b35b936.slice - libcontainer container kubepods-besteffort-pod17985068_d78a_4b83_a3e4_03169b35b936.slice.
Jan 24 00:30:29.671707 systemd[1]: Created slice kubepods-burstable-pod5f9c89a8_9a30_4332_8094_e2b372cfff86.slice - libcontainer container kubepods-burstable-pod5f9c89a8_9a30_4332_8094_e2b372cfff86.slice.
Jan 24 00:30:29.690991 systemd[1]: Created slice kubepods-besteffort-podbba2b9d8_6e2d_4162_96fa_f25acd35c593.slice - libcontainer container kubepods-besteffort-podbba2b9d8_6e2d_4162_96fa_f25acd35c593.slice.
Jan 24 00:30:29.709932 systemd[1]: Created slice kubepods-besteffort-pod9dbd8ba0_6eab_49ff_9cd2_d46a7d492e06.slice - libcontainer container kubepods-besteffort-pod9dbd8ba0_6eab_49ff_9cd2_d46a7d492e06.slice.
Jan 24 00:30:29.724432 systemd[1]: Created slice kubepods-burstable-poddb37e332_c065_4a9f_995f_7513b669f795.slice - libcontainer container kubepods-burstable-poddb37e332_c065_4a9f_995f_7513b669f795.slice.
Jan 24 00:30:29.735566 systemd[1]: Created slice kubepods-besteffort-pod450f5953_3642_4725_b413_4ccd0b446f9a.slice - libcontainer container kubepods-besteffort-pod450f5953_3642_4725_b413_4ccd0b446f9a.slice.
Jan 24 00:30:29.737169 kubelet[2559]: I0124 00:30:29.737030 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqbm\" (UniqueName: \"kubernetes.io/projected/17985068-d78a-4b83-a3e4-03169b35b936-kube-api-access-hzqbm\") pod \"whisker-66bb8788df-jbgmv\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " pod="calico-system/whisker-66bb8788df-jbgmv"
Jan 24 00:30:29.737169 kubelet[2559]: I0124 00:30:29.737083 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17985068-d78a-4b83-a3e4-03169b35b936-whisker-ca-bundle\") pod \"whisker-66bb8788df-jbgmv\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " pod="calico-system/whisker-66bb8788df-jbgmv"
Jan 24 00:30:29.737169 kubelet[2559]: I0124 00:30:29.737119 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17985068-d78a-4b83-a3e4-03169b35b936-whisker-backend-key-pair\") pod \"whisker-66bb8788df-jbgmv\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " pod="calico-system/whisker-66bb8788df-jbgmv"
Jan 24 00:30:29.742398 systemd[1]: Created slice kubepods-besteffort-pod231ec5ca_b889_4270_8427_6227d170b1c8.slice - libcontainer container kubepods-besteffort-pod231ec5ca_b889_4270_8427_6227d170b1c8.slice.
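[editor's note: the Created slice entries above show kubelet's systemd cgroup driver naming each pod's cgroup from its QoS class and UID, with the UID's dashes rewritten to underscores because "-" is systemd's slice hierarchy separator. A toy mapping that reproduces the names in the log follows; the helper is illustrative, not kubelet's actual code.]

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name for a pod, e.g.
// kubepods-besteffort-pod17985068_d78a_4b83_a3e4_03169b35b936.slice.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_") // "-" separates slice levels
	if qos == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSlice("besteffort", "17985068-d78a-4b83-a3e4-03169b35b936"))
	fmt.Println(podSlice("burstable", "5f9c89a8-9a30-4332-8094-e2b372cfff86"))
}
```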
Jan 24 00:30:29.783640 kubelet[2559]: E0124 00:30:29.783435 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:29.785167 containerd[1464]: time="2026-01-24T00:30:29.785086376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 24 00:30:29.839146 kubelet[2559]: I0124 00:30:29.838968 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xsh9r\" (UID: \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\") " pod="calico-system/goldmane-7c778bb748-xsh9r"
Jan 24 00:30:29.839146 kubelet[2559]: I0124 00:30:29.839031 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06-goldmane-key-pair\") pod \"goldmane-7c778bb748-xsh9r\" (UID: \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\") " pod="calico-system/goldmane-7c778bb748-xsh9r"
Jan 24 00:30:29.839852 kubelet[2559]: I0124 00:30:29.839229 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/231ec5ca-b889-4270-8427-6227d170b1c8-calico-apiserver-certs\") pod \"calico-apiserver-6896bc5cbd-25rx6\" (UID: \"231ec5ca-b889-4270-8427-6227d170b1c8\") " pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6"
Jan 24 00:30:29.839852 kubelet[2559]: I0124 00:30:29.839260 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lxs\" (UniqueName: \"kubernetes.io/projected/9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06-kube-api-access-q4lxs\") pod \"goldmane-7c778bb748-xsh9r\" (UID: \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\") " pod="calico-system/goldmane-7c778bb748-xsh9r"
Jan 24 00:30:29.839852 kubelet[2559]: I0124 00:30:29.839283 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/450f5953-3642-4725-b413-4ccd0b446f9a-calico-apiserver-certs\") pod \"calico-apiserver-6896bc5cbd-7zn99\" (UID: \"450f5953-3642-4725-b413-4ccd0b446f9a\") " pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99"
Jan 24 00:30:29.839852 kubelet[2559]: I0124 00:30:29.839309 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5qf\" (UniqueName: \"kubernetes.io/projected/5f9c89a8-9a30-4332-8094-e2b372cfff86-kube-api-access-br5qf\") pod \"coredns-66bc5c9577-tn4tb\" (UID: \"5f9c89a8-9a30-4332-8094-e2b372cfff86\") " pod="kube-system/coredns-66bc5c9577-tn4tb"
Jan 24 00:30:29.839852 kubelet[2559]: I0124 00:30:29.839357 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba2b9d8-6e2d-4162-96fa-f25acd35c593-tigera-ca-bundle\") pod \"calico-kube-controllers-dc5d5c67c-njvsm\" (UID: \"bba2b9d8-6e2d-4162-96fa-f25acd35c593\") " pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm"
Jan 24 00:30:29.840089 kubelet[2559]: I0124 00:30:29.839861 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpn4x\" (UniqueName: \"kubernetes.io/projected/bba2b9d8-6e2d-4162-96fa-f25acd35c593-kube-api-access-qpn4x\") pod \"calico-kube-controllers-dc5d5c67c-njvsm\" (UID: \"bba2b9d8-6e2d-4162-96fa-f25acd35c593\") " pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm"
Jan 24 00:30:29.840089 kubelet[2559]: I0124 00:30:29.839888 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06-config\") pod \"goldmane-7c778bb748-xsh9r\" (UID: \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\") " pod="calico-system/goldmane-7c778bb748-xsh9r"
Jan 24 00:30:29.840089 kubelet[2559]: I0124 00:30:29.839945 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpmf\" (UniqueName: \"kubernetes.io/projected/231ec5ca-b889-4270-8427-6227d170b1c8-kube-api-access-5bpmf\") pod \"calico-apiserver-6896bc5cbd-25rx6\" (UID: \"231ec5ca-b889-4270-8427-6227d170b1c8\") " pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6"
Jan 24 00:30:29.840089 kubelet[2559]: I0124 00:30:29.839967 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vhq\" (UniqueName: \"kubernetes.io/projected/db37e332-c065-4a9f-995f-7513b669f795-kube-api-access-j9vhq\") pod \"coredns-66bc5c9577-ppq4k\" (UID: \"db37e332-c065-4a9f-995f-7513b669f795\") " pod="kube-system/coredns-66bc5c9577-ppq4k"
Jan 24 00:30:29.840089 kubelet[2559]: I0124 00:30:29.839990 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db37e332-c065-4a9f-995f-7513b669f795-config-volume\") pod \"coredns-66bc5c9577-ppq4k\" (UID: \"db37e332-c065-4a9f-995f-7513b669f795\") " pod="kube-system/coredns-66bc5c9577-ppq4k"
Jan 24 00:30:29.840276 kubelet[2559]: I0124 00:30:29.840010 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f9c89a8-9a30-4332-8094-e2b372cfff86-config-volume\") pod \"coredns-66bc5c9577-tn4tb\" (UID: \"5f9c89a8-9a30-4332-8094-e2b372cfff86\") " pod="kube-system/coredns-66bc5c9577-tn4tb"
Jan 24 00:30:29.840276 kubelet[2559]: I0124 00:30:29.840034 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjm6\" (UniqueName: \"kubernetes.io/projected/450f5953-3642-4725-b413-4ccd0b446f9a-kube-api-access-zxjm6\") pod \"calico-apiserver-6896bc5cbd-7zn99\" (UID: \"450f5953-3642-4725-b413-4ccd0b446f9a\") " pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99"
Jan 24 00:30:29.975535 containerd[1464]: time="2026-01-24T00:30:29.974910341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bb8788df-jbgmv,Uid:17985068-d78a-4b83-a3e4-03169b35b936,Namespace:calico-system,Attempt:0,}"
Jan 24 00:30:30.004982 containerd[1464]: time="2026-01-24T00:30:30.004751474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc5d5c67c-njvsm,Uid:bba2b9d8-6e2d-4162-96fa-f25acd35c593,Namespace:calico-system,Attempt:0,}"
Jan 24 00:30:30.025273 containerd[1464]: time="2026-01-24T00:30:30.024895978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xsh9r,Uid:9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06,Namespace:calico-system,Attempt:0,}"
Jan 24 00:30:30.038722 kubelet[2559]: E0124 00:30:30.038677 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:30.042798 containerd[1464]: time="2026-01-24T00:30:30.040286566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ppq4k,Uid:db37e332-c065-4a9f-995f-7513b669f795,Namespace:kube-system,Attempt:0,}"
Jan 24 00:30:30.045789 containerd[1464]: time="2026-01-24T00:30:30.045564389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-7zn99,Uid:450f5953-3642-4725-b413-4ccd0b446f9a,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:30:30.050861 containerd[1464]: time="2026-01-24T00:30:30.050813422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-25rx6,Uid:231ec5ca-b889-4270-8427-6227d170b1c8,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:30:30.286439 kubelet[2559]: E0124 00:30:30.286298 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:30:30.289565 containerd[1464]: time="2026-01-24T00:30:30.288790008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4tb,Uid:5f9c89a8-9a30-4332-8094-e2b372cfff86,Namespace:kube-system,Attempt:0,}"
Jan 24 00:30:30.295241 containerd[1464]: time="2026-01-24T00:30:30.294817295Z" level=error msg="Failed to destroy network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.302458 containerd[1464]: time="2026-01-24T00:30:30.302257052Z" level=error msg="encountered an error cleaning up failed sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.302962 containerd[1464]: time="2026-01-24T00:30:30.302734709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc5d5c67c-njvsm,Uid:bba2b9d8-6e2d-4162-96fa-f25acd35c593,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.316813 containerd[1464]: time="2026-01-24T00:30:30.316673148Z" level=error msg="Failed to destroy network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.318696 containerd[1464]: time="2026-01-24T00:30:30.318542286Z" level=error msg="encountered an error cleaning up failed sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.319293 containerd[1464]: time="2026-01-24T00:30:30.319218144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bb8788df-jbgmv,Uid:17985068-d78a-4b83-a3e4-03169b35b936,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.325704 kubelet[2559]: E0124 00:30:30.324787 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.325704 kubelet[2559]: E0124 00:30:30.324862 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66bb8788df-jbgmv"
Jan 24 00:30:30.325704 kubelet[2559]: E0124 00:30:30.325115 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:30:30.325704 kubelet[2559]: E0124 00:30:30.325148 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm"
Jan 24 00:30:30.326533 kubelet[2559]: E0124 00:30:30.325206 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm"
Jan 24 00:30:30.326533 kubelet[2559]: E0124 00:30:30.325271 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\\\": plugin
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:30.329422 kubelet[2559]: E0124 00:30:30.327293 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66bb8788df-jbgmv" Jan 24 00:30:30.329422 kubelet[2559]: E0124 00:30:30.327375 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66bb8788df-jbgmv_calico-system(17985068-d78a-4b83-a3e4-03169b35b936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66bb8788df-jbgmv_calico-system(17985068-d78a-4b83-a3e4-03169b35b936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66bb8788df-jbgmv" podUID="17985068-d78a-4b83-a3e4-03169b35b936" Jan 24 00:30:30.385949 containerd[1464]: time="2026-01-24T00:30:30.385893909Z" level=error msg="Failed to destroy network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.386881 containerd[1464]: time="2026-01-24T00:30:30.386848320Z" level=error msg="encountered an error cleaning up failed sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.387016 containerd[1464]: time="2026-01-24T00:30:30.386989817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xsh9r,Uid:9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.390288 kubelet[2559]: E0124 00:30:30.387807 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.390288 kubelet[2559]: E0124 00:30:30.387871 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xsh9r" Jan 24 00:30:30.390288 kubelet[2559]: E0124 00:30:30.387894 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xsh9r" Jan 24 00:30:30.390692 kubelet[2559]: E0124 00:30:30.387954 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:30.392243 containerd[1464]: time="2026-01-24T00:30:30.392170532Z" level=error msg="Failed to destroy network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.393074 containerd[1464]: time="2026-01-24T00:30:30.392964372Z" level=error msg="encountered an error cleaning up failed sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.393074 containerd[1464]: time="2026-01-24T00:30:30.393024906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-25rx6,Uid:231ec5ca-b889-4270-8427-6227d170b1c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.394030 kubelet[2559]: E0124 00:30:30.393565 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.394662 kubelet[2559]: E0124 00:30:30.394417 2559 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" Jan 24 00:30:30.394662 kubelet[2559]: E0124 00:30:30.394447 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" Jan 24 00:30:30.394662 kubelet[2559]: E0124 00:30:30.394569 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:30.405064 containerd[1464]: time="2026-01-24T00:30:30.404984619Z" level=error msg="Failed to destroy network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.405710 containerd[1464]: time="2026-01-24T00:30:30.405651482Z" level=error msg="encountered an error cleaning up failed sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.405764 containerd[1464]: time="2026-01-24T00:30:30.405722795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-7zn99,Uid:450f5953-3642-4725-b413-4ccd0b446f9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.406040 kubelet[2559]: E0124 00:30:30.405989 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.406101 kubelet[2559]: E0124 00:30:30.406067 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" Jan 24 00:30:30.406101 kubelet[2559]: E0124 00:30:30.406092 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" Jan 24 00:30:30.406219 kubelet[2559]: E0124 00:30:30.406172 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:30:30.434421 containerd[1464]: time="2026-01-24T00:30:30.434372497Z" level=error msg="Failed to destroy network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.435321 containerd[1464]: time="2026-01-24T00:30:30.435188268Z" level=error msg="encountered an error cleaning up failed sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.435321 containerd[1464]: time="2026-01-24T00:30:30.435273648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ppq4k,Uid:db37e332-c065-4a9f-995f-7513b669f795,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.435804 kubelet[2559]: E0124 00:30:30.435681 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.435804 kubelet[2559]: E0124 00:30:30.435765 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ppq4k" Jan 24 00:30:30.435901 kubelet[2559]: E0124 00:30:30.435792 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ppq4k" Jan 24 00:30:30.435927 kubelet[2559]: E0124 00:30:30.435893 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ppq4k_kube-system(db37e332-c065-4a9f-995f-7513b669f795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ppq4k_kube-system(db37e332-c065-4a9f-995f-7513b669f795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ppq4k" podUID="db37e332-c065-4a9f-995f-7513b669f795" Jan 24 00:30:30.470160 containerd[1464]: time="2026-01-24T00:30:30.470066441Z" level=error msg="Failed to destroy network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.470681 containerd[1464]: time="2026-01-24T00:30:30.470640819Z" level=error msg="encountered an error cleaning up failed sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.470760 containerd[1464]: time="2026-01-24T00:30:30.470705000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4tb,Uid:5f9c89a8-9a30-4332-8094-e2b372cfff86,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.471054 kubelet[2559]: E0124 00:30:30.471020 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.471186 kubelet[2559]: E0124 00:30:30.471148 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tn4tb" Jan 24 00:30:30.471186 kubelet[2559]: E0124 00:30:30.471178 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tn4tb" Jan 24 00:30:30.471274 kubelet[2559]: E0124 00:30:30.471254 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tn4tb_kube-system(5f9c89a8-9a30-4332-8094-e2b372cfff86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tn4tb_kube-system(5f9c89a8-9a30-4332-8094-e2b372cfff86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tn4tb" podUID="5f9c89a8-9a30-4332-8094-e2b372cfff86" Jan 24 00:30:30.594936 systemd[1]: Created slice kubepods-besteffort-pod40661dc3_d91f_42a2_a397_77dbe1e37cee.slice - libcontainer container kubepods-besteffort-pod40661dc3_d91f_42a2_a397_77dbe1e37cee.slice. 
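
Every sandbox failure in the burst above reduces to one root cause: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes on startup, and that container's image is still being pulled (the PullImage entry at 00:30:29.785). Until the file exists, every CNI ADD fails with ENOENT and kubelet parks the pods in CreatePodSandboxError retry. A minimal triage sketch in Python, assuming this journal has been dumped to a hypothetical journal.log, tallies the failures per pod:

```python
import re
from collections import Counter

# Hypothetical dump of this journal, e.g. `journalctl -o short > journal.log`.
LOG = "journal.log"

# Match entries carrying the Calico nodename error and capture which pod the
# failure was attributed to (the pod="ns/name" field kubelet appends).
pat = re.compile(
    r'stat /var/lib/calico/nodename: no such file or directory.*?pod="([^"]+)"'
)

failures = Counter()
with open(LOG) as fh:
    for line in fh:
        m = pat.search(line)
        if m:
            failures[m.group(1)] += 1

for pod, count in failures.most_common():
    print(f"{count:4d}  {pod}")
```

A storm like this is normal on first boot; the count per pod simply reflects how many retries happened before calico-node came up.
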
Jan 24 00:30:30.611124 containerd[1464]: time="2026-01-24T00:30:30.611050954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frc8p,Uid:40661dc3-d91f-42a2-a397-77dbe1e37cee,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:30.740873 containerd[1464]: time="2026-01-24T00:30:30.740783909Z" level=error msg="Failed to destroy network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.741705 containerd[1464]: time="2026-01-24T00:30:30.741570675Z" level=error msg="encountered an error cleaning up failed sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.741785 containerd[1464]: time="2026-01-24T00:30:30.741745954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frc8p,Uid:40661dc3-d91f-42a2-a397-77dbe1e37cee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.745357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b-shm.mount: Deactivated successfully. 
Jan 24 00:30:30.765811 kubelet[2559]: E0124 00:30:30.765544 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.765811 kubelet[2559]: E0124 00:30:30.765737 2559 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:30.765811 kubelet[2559]: E0124 00:30:30.765758 2559 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frc8p" Jan 24 00:30:30.766424 kubelet[2559]: E0124 00:30:30.765823 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:30.797369 kubelet[2559]: I0124 00:30:30.797238 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:30.800298 kubelet[2559]: I0124 00:30:30.800249 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:30.803523 kubelet[2559]: I0124 00:30:30.803152 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:30.807202 kubelet[2559]: I0124 00:30:30.806434 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:30.810222 containerd[1464]: time="2026-01-24T00:30:30.808897839Z" level=info msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" Jan 24 00:30:30.810222 containerd[1464]: time="2026-01-24T00:30:30.809054320Z" level=info msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" Jan 24 00:30:30.810222 containerd[1464]: time="2026-01-24T00:30:30.808897446Z" level=info msg="StopPodSandbox 
for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" Jan 24 00:30:30.810396 kubelet[2559]: I0124 00:30:30.810126 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:30.810718 containerd[1464]: time="2026-01-24T00:30:30.810673210Z" level=info msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" Jan 24 00:30:30.812332 containerd[1464]: time="2026-01-24T00:30:30.812252126Z" level=info msg="Ensure that sandbox 0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647 in task-service has been cleanup successfully" Jan 24 00:30:30.812332 containerd[1464]: time="2026-01-24T00:30:30.812316274Z" level=info msg="Ensure that sandbox 55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b in task-service has been cleanup successfully" Jan 24 00:30:30.812645 containerd[1464]: time="2026-01-24T00:30:30.812465083Z" level=info msg="Ensure that sandbox bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566 in task-service has been cleanup successfully" Jan 24 00:30:30.812780 containerd[1464]: time="2026-01-24T00:30:30.812750039Z" level=info msg="Ensure that sandbox 030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77 in task-service has been cleanup successfully" Jan 24 00:30:30.821160 containerd[1464]: time="2026-01-24T00:30:30.821112236Z" level=info msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" Jan 24 00:30:30.832721 kubelet[2559]: I0124 00:30:30.831790 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:30.834320 containerd[1464]: time="2026-01-24T00:30:30.834275019Z" level=info msg="Ensure that sandbox a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83 in task-service has been cleanup successfully" Jan 24 00:30:30.839857 containerd[1464]: time="2026-01-24T00:30:30.839771064Z" level=info msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" Jan 24 00:30:30.840158 containerd[1464]: time="2026-01-24T00:30:30.840019611Z" level=info msg="Ensure that sandbox d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c in task-service has been cleanup successfully" Jan 24 00:30:30.841908 containerd[1464]: time="2026-01-24T00:30:30.841746178Z" level=info msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" Jan 24 00:30:30.842868 containerd[1464]: time="2026-01-24T00:30:30.842676866Z" level=info msg="Ensure that sandbox 7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73 in task-service has been cleanup successfully" Jan 24 00:30:30.842935 kubelet[2559]: I0124 00:30:30.840728 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:30.849977 kubelet[2559]: I0124 00:30:30.849679 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:30.857815 containerd[1464]: time="2026-01-24T00:30:30.857700950Z" level=info msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" Jan 24 00:30:30.863264 containerd[1464]: time="2026-01-24T00:30:30.863100777Z" level=info msg="Ensure that 
sandbox 01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f in task-service has been cleanup successfully" Jan 24 00:30:30.968674 containerd[1464]: time="2026-01-24T00:30:30.968360072Z" level=error msg="StopPodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" failed" error="failed to destroy network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.968934 kubelet[2559]: E0124 00:30:30.968890 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:30.969043 kubelet[2559]: E0124 00:30:30.968960 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b"} Jan 24 00:30:30.969043 kubelet[2559]: E0124 00:30:30.969023 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40661dc3-d91f-42a2-a397-77dbe1e37cee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:30.969211 kubelet[2559]: E0124 00:30:30.969059 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40661dc3-d91f-42a2-a397-77dbe1e37cee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:30.983570 containerd[1464]: time="2026-01-24T00:30:30.983323381Z" level=error msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" failed" error="failed to destroy network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.984339 kubelet[2559]: E0124 00:30:30.984281 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:30.984904 kubelet[2559]: E0124 00:30:30.984765 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77"} Jan 24 00:30:30.984904 kubelet[2559]: E0124 00:30:30.984826 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"450f5953-3642-4725-b413-4ccd0b446f9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:30.984904 kubelet[2559]: E0124 00:30:30.984866 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"450f5953-3642-4725-b413-4ccd0b446f9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:30:30.991725 containerd[1464]: time="2026-01-24T00:30:30.991539334Z" level=error msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" failed" error="failed to destroy network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.992354 kubelet[2559]: E0124 00:30:30.992156 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:30.992354 kubelet[2559]: E0124 00:30:30.992225 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c"} Jan 24 00:30:30.992354 kubelet[2559]: E0124 00:30:30.992272 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:30.992354 kubelet[2559]: E0124 00:30:30.992309 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:30.994360 containerd[1464]: time="2026-01-24T00:30:30.993962113Z" level=error msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" failed" error="failed to destroy network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.994565 containerd[1464]: time="2026-01-24T00:30:30.994401831Z" level=error msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" failed" error="failed to destroy network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:30.995341 kubelet[2559]: E0124 00:30:30.995041 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:30.995341 kubelet[2559]: E0124 00:30:30.995086 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73"} Jan 24 00:30:30.995341 kubelet[2559]: E0124 00:30:30.995119 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bba2b9d8-6e2d-4162-96fa-f25acd35c593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:30.995341 kubelet[2559]: E0124 00:30:30.995152 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bba2b9d8-6e2d-4162-96fa-f25acd35c593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:30.995814 kubelet[2559]: E0124 00:30:30.995195 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:30.995814 kubelet[2559]: E0124 00:30:30.995219 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647"} Jan 24 00:30:30.995814 kubelet[2559]: E0124 00:30:30.995266 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17985068-d78a-4b83-a3e4-03169b35b936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:30.995814 kubelet[2559]: E0124 00:30:30.995296 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17985068-d78a-4b83-a3e4-03169b35b936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66bb8788df-jbgmv" podUID="17985068-d78a-4b83-a3e4-03169b35b936" Jan 24 00:30:31.003933 containerd[1464]: time="2026-01-24T00:30:31.003683531Z" level=error msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" failed" error="failed to destroy network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:31.004352 kubelet[2559]: E0124 00:30:31.004145 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:31.004352 kubelet[2559]: E0124 00:30:31.004216 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566"} Jan 24 00:30:31.004352 kubelet[2559]: E0124 00:30:31.004304 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f9c89a8-9a30-4332-8094-e2b372cfff86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jan 24 00:30:31.004742 kubelet[2559]: E0124 00:30:31.004349 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f9c89a8-9a30-4332-8094-e2b372cfff86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tn4tb" podUID="5f9c89a8-9a30-4332-8094-e2b372cfff86" Jan 24 00:30:31.029983 containerd[1464]: time="2026-01-24T00:30:31.029417177Z" level=error msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" failed" error="failed to destroy network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:31.031257 kubelet[2559]: E0124 00:30:31.030686 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:31.031919 kubelet[2559]: E0124 00:30:31.031391 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f"} Jan 24 00:30:31.031919 kubelet[2559]: E0124 00:30:31.031437 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db37e332-c065-4a9f-995f-7513b669f795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:31.031919 kubelet[2559]: E0124 00:30:31.031871 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db37e332-c065-4a9f-995f-7513b669f795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ppq4k" podUID="db37e332-c065-4a9f-995f-7513b669f795" Jan 24 00:30:31.035735 containerd[1464]: time="2026-01-24T00:30:31.035528760Z" level=error msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" failed" error="failed to destroy network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 24 00:30:31.036286 kubelet[2559]: E0124 00:30:31.036180 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:31.036286 kubelet[2559]: E0124 00:30:31.036276 2559 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83"} Jan 24 00:30:31.036392 kubelet[2559]: E0124 00:30:31.036317 2559 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"231ec5ca-b889-4270-8427-6227d170b1c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:31.036392 kubelet[2559]: E0124 00:30:31.036347 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"231ec5ca-b889-4270-8427-6227d170b1c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:37.063305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341208010.mount: Deactivated successfully. 
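
The cleanup pass fares no better than the creates: Calico's CNI DEL reads the same /var/lib/calico/nodename file, so every StopPodSandbox above fails with the identical stat error, and kubelet can neither finish nor tear down these sandboxes until calico-node runs. Illustrative only (not the plugin's actual code), a Python sketch of the gate being enforced here:

```python
import os
import time

# File written by the calico/node container when it starts; its absence is
# exactly what every ADD/DEL above is failing on.
NODENAME = "/var/lib/calico/nodename"

def wait_for_calico(timeout=60.0, interval=2.0):
    """Poll until calico-node has published the node name, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(NODENAME):
            with open(NODENAME) as fh:
                return fh.read().strip()
        time.sleep(interval)
    raise TimeoutError(f"{NODENAME} still missing; is calico-node running?")

print(wait_for_calico())
```
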
Jan 24 00:30:37.385908 containerd[1464]: time="2026-01-24T00:30:37.385542050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.388784 containerd[1464]: time="2026-01-24T00:30:37.387890156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:30:37.391248 containerd[1464]: time="2026-01-24T00:30:37.391167286Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.393786 containerd[1464]: time="2026-01-24T00:30:37.393717079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.394400 containerd[1464]: time="2026-01-24T00:30:37.394289635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.609137455s" Jan 24 00:30:37.394400 containerd[1464]: time="2026-01-24T00:30:37.394345571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:30:37.414957 containerd[1464]: time="2026-01-24T00:30:37.414885182Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:30:37.459368 containerd[1464]: time="2026-01-24T00:30:37.459196088Z" level=info msg="CreateContainer within sandbox \"dc911d2fdd82a85364611e23c2731eb335fdf515d0260550bca02f7819c05520\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342\"" Jan 24 00:30:37.461111 containerd[1464]: time="2026-01-24T00:30:37.461040691Z" level=info msg="StartContainer for \"f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342\"" Jan 24 00:30:37.565968 systemd[1]: Started cri-containerd-f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342.scope - libcontainer container f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342. Jan 24 00:30:37.621080 containerd[1464]: time="2026-01-24T00:30:37.621011270Z" level=info msg="StartContainer for \"f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342\" returns successfully" Jan 24 00:30:37.860517 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:30:37.860787 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
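
The pull entries above carry enough to work out the effective transfer rate: 156883675 bytes read over the reported 7.609137455 s. A quick check, with the numbers copied from the log:

```python
# Values copied verbatim from the containerd pull entries above.
bytes_read = 156_883_675      # "active requests=0, bytes read=156883675"
seconds = 7.609_137_455       # "in 7.609137455s"

print(f"{bytes_read / seconds / 1e6:.1f} MB/s")     # ~20.6 MB/s
print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")  # ~19.7 MiB/s
```

The WireGuard module load that follows is plausibly calico-node probing for WireGuard support as it starts; the timing (about a quarter second after StartContainer returns) fits, though the log does not say so explicitly.
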
Jan 24 00:30:37.886672 kubelet[2559]: E0124 00:30:37.886279 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:37.935439 kubelet[2559]: I0124 00:30:37.932219 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8w55" podStartSLOduration=1.413447678 podStartE2EDuration="15.932200593s" podCreationTimestamp="2026-01-24 00:30:22 +0000 UTC" firstStartedPulling="2026-01-24 00:30:22.876808968 +0000 UTC m=+20.439376234" lastFinishedPulling="2026-01-24 00:30:37.395561883 +0000 UTC m=+34.958129149" observedRunningTime="2026-01-24 00:30:37.930260345 +0000 UTC m=+35.492827612" watchObservedRunningTime="2026-01-24 00:30:37.932200593 +0000 UTC m=+35.494767860" Jan 24 00:30:38.148309 containerd[1464]: time="2026-01-24T00:30:38.147521599Z" level=info msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.334 [INFO][3868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.334 [INFO][3868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" iface="eth0" netns="/var/run/netns/cni-bfafb384-c4ae-d127-0b31-00a3f199c9cd" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.335 [INFO][3868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" iface="eth0" netns="/var/run/netns/cni-bfafb384-c4ae-d127-0b31-00a3f199c9cd" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.337 [INFO][3868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" iface="eth0" netns="/var/run/netns/cni-bfafb384-c4ae-d127-0b31-00a3f199c9cd" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.337 [INFO][3868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.337 [INFO][3868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.506 [INFO][3884] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.507 [INFO][3884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.508 [INFO][3884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.519 [WARNING][3884] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.520 [INFO][3884] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.522 [INFO][3884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:38.528742 containerd[1464]: 2026-01-24 00:30:38.525 [INFO][3868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:30:38.529685 containerd[1464]: time="2026-01-24T00:30:38.528941734Z" level=info msg="TearDown network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" successfully" Jan 24 00:30:38.529685 containerd[1464]: time="2026-01-24T00:30:38.528977262Z" level=info msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" returns successfully" Jan 24 00:30:38.532458 systemd[1]: run-netns-cni\x2dbfafb384\x2dc4ae\x2dd127\x2d0b31\x2d00a3f199c9cd.mount: Deactivated successfully. Jan 24 00:30:38.629258 kubelet[2559]: I0124 00:30:38.629187 2559 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17985068-d78a-4b83-a3e4-03169b35b936-whisker-backend-key-pair\") pod \"17985068-d78a-4b83-a3e4-03169b35b936\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " Jan 24 00:30:38.629258 kubelet[2559]: I0124 00:30:38.629250 2559 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17985068-d78a-4b83-a3e4-03169b35b936-whisker-ca-bundle\") pod \"17985068-d78a-4b83-a3e4-03169b35b936\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " Jan 24 00:30:38.629420 kubelet[2559]: I0124 00:30:38.629272 2559 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzqbm\" (UniqueName: \"kubernetes.io/projected/17985068-d78a-4b83-a3e4-03169b35b936-kube-api-access-hzqbm\") pod \"17985068-d78a-4b83-a3e4-03169b35b936\" (UID: \"17985068-d78a-4b83-a3e4-03169b35b936\") " Jan 24 00:30:38.632204 kubelet[2559]: I0124 00:30:38.631195 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17985068-d78a-4b83-a3e4-03169b35b936-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "17985068-d78a-4b83-a3e4-03169b35b936" (UID: "17985068-d78a-4b83-a3e4-03169b35b936"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:30:38.633965 kubelet[2559]: I0124 00:30:38.633903 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17985068-d78a-4b83-a3e4-03169b35b936-kube-api-access-hzqbm" (OuterVolumeSpecName: "kube-api-access-hzqbm") pod "17985068-d78a-4b83-a3e4-03169b35b936" (UID: "17985068-d78a-4b83-a3e4-03169b35b936"). InnerVolumeSpecName "kube-api-access-hzqbm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:30:38.635507 kubelet[2559]: I0124 00:30:38.635456 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17985068-d78a-4b83-a3e4-03169b35b936-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "17985068-d78a-4b83-a3e4-03169b35b936" (UID: "17985068-d78a-4b83-a3e4-03169b35b936"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:30:38.638105 systemd[1]: var-lib-kubelet-pods-17985068\x2dd78a\x2d4b83\x2da3e4\x2d03169b35b936-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzqbm.mount: Deactivated successfully. Jan 24 00:30:38.638275 systemd[1]: var-lib-kubelet-pods-17985068\x2dd78a\x2d4b83\x2da3e4\x2d03169b35b936-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:30:38.730276 kubelet[2559]: I0124 00:30:38.730143 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hzqbm\" (UniqueName: \"kubernetes.io/projected/17985068-d78a-4b83-a3e4-03169b35b936-kube-api-access-hzqbm\") on node \"localhost\" DevicePath \"\"" Jan 24 00:30:38.730276 kubelet[2559]: I0124 00:30:38.730203 2559 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17985068-d78a-4b83-a3e4-03169b35b936-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:30:38.730276 kubelet[2559]: I0124 00:30:38.730215 2559 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17985068-d78a-4b83-a3e4-03169b35b936-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:30:38.888669 kubelet[2559]: E0124 00:30:38.886921 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:38.893426 systemd[1]: Removed slice kubepods-besteffort-pod17985068_d78a_4b83_a3e4_03169b35b936.slice - libcontainer container kubepods-besteffort-pod17985068_d78a_4b83_a3e4_03169b35b936.slice. Jan 24 00:30:39.005145 systemd[1]: Created slice kubepods-besteffort-podc7a745d0_7ac5_41d7_b9e1_a1e923945f5c.slice - libcontainer container kubepods-besteffort-podc7a745d0_7ac5_41d7_b9e1_a1e923945f5c.slice. 
Jan 24 00:30:39.033364 kubelet[2559]: I0124 00:30:39.033276 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a745d0-7ac5-41d7-b9e1-a1e923945f5c-whisker-backend-key-pair\") pod \"whisker-558b4bdd9c-hsrgc\" (UID: \"c7a745d0-7ac5-41d7-b9e1-a1e923945f5c\") " pod="calico-system/whisker-558b4bdd9c-hsrgc" Jan 24 00:30:39.033364 kubelet[2559]: I0124 00:30:39.033350 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a745d0-7ac5-41d7-b9e1-a1e923945f5c-whisker-ca-bundle\") pod \"whisker-558b4bdd9c-hsrgc\" (UID: \"c7a745d0-7ac5-41d7-b9e1-a1e923945f5c\") " pod="calico-system/whisker-558b4bdd9c-hsrgc" Jan 24 00:30:39.033364 kubelet[2559]: I0124 00:30:39.033376 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kk4\" (UniqueName: \"kubernetes.io/projected/c7a745d0-7ac5-41d7-b9e1-a1e923945f5c-kube-api-access-27kk4\") pod \"whisker-558b4bdd9c-hsrgc\" (UID: \"c7a745d0-7ac5-41d7-b9e1-a1e923945f5c\") " pod="calico-system/whisker-558b4bdd9c-hsrgc" Jan 24 00:30:39.316914 containerd[1464]: time="2026-01-24T00:30:39.316777209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b4bdd9c-hsrgc,Uid:c7a745d0-7ac5-41d7-b9e1-a1e923945f5c,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:39.496910 systemd-networkd[1390]: cali45fd29d4eb5: Link UP Jan 24 00:30:39.497227 systemd-networkd[1390]: cali45fd29d4eb5: Gained carrier Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.369 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.386 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0 whisker-558b4bdd9c- calico-system c7a745d0-7ac5-41d7-b9e1-a1e923945f5c 912 0 2026-01-24 00:30:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:558b4bdd9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-558b4bdd9c-hsrgc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali45fd29d4eb5 [] [] }} ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.387 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.433 [INFO][3946] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" HandleID="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Workload="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.434 [INFO][3946] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" HandleID="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Workload="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-558b4bdd9c-hsrgc", "timestamp":"2026-01-24 00:30:39.43397873 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.434 [INFO][3946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.434 [INFO][3946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.434 [INFO][3946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.445 [INFO][3946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.453 [INFO][3946] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.460 [INFO][3946] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.463 [INFO][3946] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.467 [INFO][3946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.467 [INFO][3946] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.470 [INFO][3946] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30 Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.474 [INFO][3946] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.481 [INFO][3946] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.481 [INFO][3946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" host="localhost" Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.481 [INFO][3946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:39.514846 containerd[1464]: 2026-01-24 00:30:39.481 [INFO][3946] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" HandleID="k8s-pod-network.70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Workload="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.485 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0", GenerateName:"whisker-558b4bdd9c-", Namespace:"calico-system", SelfLink:"", UID:"c7a745d0-7ac5-41d7-b9e1-a1e923945f5c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558b4bdd9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-558b4bdd9c-hsrgc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali45fd29d4eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.485 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.485 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45fd29d4eb5 ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.497 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.497 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0", GenerateName:"whisker-558b4bdd9c-", Namespace:"calico-system", SelfLink:"", UID:"c7a745d0-7ac5-41d7-b9e1-a1e923945f5c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558b4bdd9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30", Pod:"whisker-558b4bdd9c-hsrgc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali45fd29d4eb5", MAC:"46:a6:7e:89:ba:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:39.516000 containerd[1464]: 2026-01-24 00:30:39.510 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30" Namespace="calico-system" Pod="whisker-558b4bdd9c-hsrgc" WorkloadEndpoint="localhost-k8s-whisker--558b4bdd9c--hsrgc-eth0" Jan 24 00:30:39.556700 containerd[1464]: time="2026-01-24T00:30:39.554701780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:39.556700 containerd[1464]: time="2026-01-24T00:30:39.556502439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:39.556700 containerd[1464]: time="2026-01-24T00:30:39.556525904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:39.557780 containerd[1464]: time="2026-01-24T00:30:39.556883584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:39.594038 systemd[1]: Started cri-containerd-70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30.scope - libcontainer container 70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30. 
Jan 24 00:30:39.610723 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:39.652451 containerd[1464]: time="2026-01-24T00:30:39.652399284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b4bdd9c-hsrgc,Uid:c7a745d0-7ac5-41d7-b9e1-a1e923945f5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"70466442ac3ac828ee21ac20d2705c64f280693b5fac9794d4a5bfd02f99ce30\"" Jan 24 00:30:39.659333 containerd[1464]: time="2026-01-24T00:30:39.659011430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:30:39.730088 containerd[1464]: time="2026-01-24T00:30:39.730002153Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:39.779102 containerd[1464]: time="2026-01-24T00:30:39.760870581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:30:39.779278 containerd[1464]: time="2026-01-24T00:30:39.761802140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:30:39.780507 kubelet[2559]: E0124 00:30:39.779550 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:39.780507 kubelet[2559]: E0124 00:30:39.779716 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:39.780507 kubelet[2559]: E0124 00:30:39.779885 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:39.784445 containerd[1464]: time="2026-01-24T00:30:39.784330300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:30:39.854741 containerd[1464]: time="2026-01-24T00:30:39.854523116Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:39.856859 containerd[1464]: time="2026-01-24T00:30:39.856625551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:30:39.856859 containerd[1464]: time="2026-01-24T00:30:39.856745946Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:39.857691 kubelet[2559]: E0124 00:30:39.857158 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:39.857691 kubelet[2559]: E0124 00:30:39.857202 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:39.857691 kubelet[2559]: E0124 00:30:39.857272 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:39.858339 kubelet[2559]: E0124 00:30:39.857310 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:30:39.892614 kubelet[2559]: E0124 00:30:39.892379 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:30:40.033811 kubelet[2559]: I0124 00:30:40.033738 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:30:40.034195 
kubelet[2559]: E0124 00:30:40.034136 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:40.587242 kubelet[2559]: I0124 00:30:40.587124 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17985068-d78a-4b83-a3e4-03169b35b936" path="/var/lib/kubelet/pods/17985068-d78a-4b83-a3e4-03169b35b936/volumes" Jan 24 00:30:40.893727 kubelet[2559]: E0124 00:30:40.893538 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:40.894892 kubelet[2559]: E0124 00:30:40.894792 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:30:40.903988 systemd-networkd[1390]: cali45fd29d4eb5: Gained IPv6LL Jan 24 00:30:41.249751 kernel: bpftool[4174]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:30:41.531076 systemd-networkd[1390]: vxlan.calico: Link UP Jan 24 00:30:41.531089 systemd-networkd[1390]: vxlan.calico: Gained carrier Jan 24 00:30:42.584440 containerd[1464]: time="2026-01-24T00:30:42.584388277Z" level=info msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" Jan 24 00:30:42.586806 containerd[1464]: time="2026-01-24T00:30:42.585822380Z" level=info msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" Jan 24 00:30:42.586806 containerd[1464]: time="2026-01-24T00:30:42.585927369Z" level=info msg="StopPodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" Jan 24 00:30:42.586806 containerd[1464]: time="2026-01-24T00:30:42.585313987Z" level=info msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.670 [INFO][4288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.671 [INFO][4288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" iface="eth0" netns="/var/run/netns/cni-de00d56b-e786-fbbf-1af5-daaca67c39aa" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.671 [INFO][4288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" iface="eth0" netns="/var/run/netns/cni-de00d56b-e786-fbbf-1af5-daaca67c39aa" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.672 [INFO][4288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" iface="eth0" netns="/var/run/netns/cni-de00d56b-e786-fbbf-1af5-daaca67c39aa" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.672 [INFO][4288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.673 [INFO][4288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.726 [INFO][4323] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.729 [INFO][4323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.729 [INFO][4323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.736 [WARNING][4323] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.736 [INFO][4323] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.740 [INFO][4323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:42.754548 containerd[1464]: 2026-01-24 00:30:42.746 [INFO][4288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:30:42.757504 systemd[1]: run-netns-cni\x2dde00d56b\x2de786\x2dfbbf\x2d1af5\x2ddaaca67c39aa.mount: Deactivated successfully. 
Jan 24 00:30:42.770538 containerd[1464]: time="2026-01-24T00:30:42.770014211Z" level=info msg="TearDown network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" successfully" Jan 24 00:30:42.770538 containerd[1464]: time="2026-01-24T00:30:42.770049306Z" level=info msg="StopPodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" returns successfully" Jan 24 00:30:42.779123 containerd[1464]: time="2026-01-24T00:30:42.778695341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frc8p,Uid:40661dc3-d91f-42a2-a397-77dbe1e37cee,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.681 [INFO][4302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.681 [INFO][4302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" iface="eth0" netns="/var/run/netns/cni-c854146f-b5bd-8e4d-a964-d40685aac181" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.682 [INFO][4302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" iface="eth0" netns="/var/run/netns/cni-c854146f-b5bd-8e4d-a964-d40685aac181" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.682 [INFO][4302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" iface="eth0" netns="/var/run/netns/cni-c854146f-b5bd-8e4d-a964-d40685aac181" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.682 [INFO][4302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.682 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.742 [INFO][4330] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.742 [INFO][4330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.742 [INFO][4330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.757 [WARNING][4330] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.758 [INFO][4330] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.770 [INFO][4330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:42.780825 containerd[1464]: 2026-01-24 00:30:42.776 [INFO][4302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:30:42.782137 containerd[1464]: time="2026-01-24T00:30:42.781196205Z" level=info msg="TearDown network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" successfully" Jan 24 00:30:42.782137 containerd[1464]: time="2026-01-24T00:30:42.781225169Z" level=info msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" returns successfully" Jan 24 00:30:42.784444 systemd[1]: run-netns-cni\x2dc854146f\x2db5bd\x2d8e4d\x2da964\x2dd40685aac181.mount: Deactivated successfully. Jan 24 00:30:42.786463 containerd[1464]: time="2026-01-24T00:30:42.786141788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xsh9r,Uid:9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" iface="eth0" netns="/var/run/netns/cni-c5f1b811-c3ed-a2f9-964e-c2dda45572a0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" iface="eth0" netns="/var/run/netns/cni-c5f1b811-c3ed-a2f9-964e-c2dda45572a0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" iface="eth0" netns="/var/run/netns/cni-c5f1b811-c3ed-a2f9-964e-c2dda45572a0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.696 [INFO][4299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.791 [INFO][4336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.792 [INFO][4336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.794 [INFO][4336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.807 [WARNING][4336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.807 [INFO][4336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.810 [INFO][4336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:42.823298 containerd[1464]: 2026-01-24 00:30:42.816 [INFO][4299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:30:42.824997 containerd[1464]: time="2026-01-24T00:30:42.823958550Z" level=info msg="TearDown network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" successfully" Jan 24 00:30:42.824997 containerd[1464]: time="2026-01-24T00:30:42.823983346Z" level=info msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" returns successfully" Jan 24 00:30:42.828071 systemd[1]: run-netns-cni\x2dc5f1b811\x2dc3ed\x2da2f9\x2d964e\x2dc2dda45572a0.mount: Deactivated successfully. Jan 24 00:30:42.833994 containerd[1464]: time="2026-01-24T00:30:42.833545747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-25rx6,Uid:231ec5ca-b889-4270-8427-6227d170b1c8,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.714 [INFO][4294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.715 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" iface="eth0" netns="/var/run/netns/cni-d4133bc0-4e09-a8a3-d748-a1d94e183c28" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.715 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" iface="eth0" netns="/var/run/netns/cni-d4133bc0-4e09-a8a3-d748-a1d94e183c28" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.716 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" iface="eth0" netns="/var/run/netns/cni-d4133bc0-4e09-a8a3-d748-a1d94e183c28" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.716 [INFO][4294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.716 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.800 [INFO][4344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.801 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.810 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.826 [WARNING][4344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.826 [INFO][4344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.830 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:42.840450 containerd[1464]: 2026-01-24 00:30:42.836 [INFO][4294] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:30:42.842311 containerd[1464]: time="2026-01-24T00:30:42.842019232Z" level=info msg="TearDown network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" successfully" Jan 24 00:30:42.842311 containerd[1464]: time="2026-01-24T00:30:42.842133833Z" level=info msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" returns successfully" Jan 24 00:30:42.845570 kubelet[2559]: E0124 00:30:42.845488 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:42.847809 containerd[1464]: time="2026-01-24T00:30:42.847417621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4tb,Uid:5f9c89a8-9a30-4332-8094-e2b372cfff86,Namespace:kube-system,Attempt:1,}" Jan 24 00:30:43.012524 systemd-networkd[1390]: calid467a2d6e80: Link UP Jan 24 00:30:43.014692 systemd-networkd[1390]: calid467a2d6e80: Gained carrier Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.869 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--frc8p-eth0 csi-node-driver- calico-system 40661dc3-d91f-42a2-a397-77dbe1e37cee 957 0 2026-01-24 00:30:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-frc8p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid467a2d6e80 [] [] }} ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.870 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.934 [INFO][4391] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" HandleID="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.934 [INFO][4391] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" HandleID="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001bb9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-frc8p", "timestamp":"2026-01-24 00:30:42.934327605 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.935 [INFO][4391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.935 [INFO][4391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.935 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.954 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.965 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.971 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.974 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.981 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.981 [INFO][4391] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.984 [INFO][4391] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0 Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:42.988 [INFO][4391] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:43.000 [INFO][4391] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:43.000 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" host="localhost" Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:43.000 [INFO][4391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:43.035149 containerd[1464]: 2026-01-24 00:30:43.000 [INFO][4391] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" HandleID="k8s-pod-network.4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.008 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--frc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40661dc3-d91f-42a2-a397-77dbe1e37cee", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-frc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid467a2d6e80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.008 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.008 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid467a2d6e80 ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.017 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.018 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--frc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40661dc3-d91f-42a2-a397-77dbe1e37cee", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0", Pod:"csi-node-driver-frc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid467a2d6e80", MAC:"16:07:78:9c:8d:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.035827 containerd[1464]: 2026-01-24 00:30:43.030 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0" Namespace="calico-system" Pod="csi-node-driver-frc8p" WorkloadEndpoint="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:30:43.062323 containerd[1464]: time="2026-01-24T00:30:43.062188348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:43.062966 containerd[1464]: time="2026-01-24T00:30:43.062751032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:43.062966 containerd[1464]: time="2026-01-24T00:30:43.062768122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.062966 containerd[1464]: time="2026-01-24T00:30:43.062877694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.097041 systemd[1]: Started cri-containerd-4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0.scope - libcontainer container 4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0. 
Jan 24 00:30:43.109201 systemd-networkd[1390]: cali93ded120bd4: Link UP Jan 24 00:30:43.109961 systemd-networkd[1390]: cali93ded120bd4: Gained carrier Jan 24 00:30:43.121835 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:42.905 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--xsh9r-eth0 goldmane-7c778bb748- calico-system 9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06 958 0 2026-01-24 00:30:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-xsh9r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali93ded120bd4 [] [] }} ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:42.906 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:42.962 [INFO][4414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" HandleID="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:42.962 [INFO][4414] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" HandleID="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-xsh9r", "timestamp":"2026-01-24 00:30:42.962274098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:42.962 [INFO][4414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.000 [INFO][4414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.001 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.052 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.066 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.074 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.076 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.079 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.079 [INFO][4414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.081 [INFO][4414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2 Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.087 [INFO][4414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.095 [INFO][4414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.096 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" host="localhost" Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.096 [INFO][4414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:43.132120 containerd[1464]: 2026-01-24 00:30:43.096 [INFO][4414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" HandleID="k8s-pod-network.9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.102 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xsh9r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-xsh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93ded120bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.102 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.102 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93ded120bd4 ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.111 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.111 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xsh9r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2", Pod:"goldmane-7c778bb748-xsh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93ded120bd4", MAC:"f2:df:f3:ad:ad:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.132763 containerd[1464]: 2026-01-24 00:30:43.126 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2" Namespace="calico-system" Pod="goldmane-7c778bb748-xsh9r" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:30:43.149087 containerd[1464]: time="2026-01-24T00:30:43.149019140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frc8p,Uid:40661dc3-d91f-42a2-a397-77dbe1e37cee,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0\"" Jan 24 00:30:43.152901 containerd[1464]: time="2026-01-24T00:30:43.152729215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:30:43.186331 containerd[1464]: time="2026-01-24T00:30:43.186102679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:43.187949 containerd[1464]: time="2026-01-24T00:30:43.187699545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:43.187949 containerd[1464]: time="2026-01-24T00:30:43.187731523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.187949 containerd[1464]: time="2026-01-24T00:30:43.187822290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.216388 systemd[1]: Started cri-containerd-9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2.scope - libcontainer container 9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2. 
Jan 24 00:30:43.227426 systemd-networkd[1390]: calic6798686840: Link UP Jan 24 00:30:43.228993 containerd[1464]: time="2026-01-24T00:30:43.228935715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:43.229311 systemd-networkd[1390]: calic6798686840: Gained carrier Jan 24 00:30:43.231171 containerd[1464]: time="2026-01-24T00:30:43.231126240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:30:43.231454 containerd[1464]: time="2026-01-24T00:30:43.231338300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:30:43.231852 kubelet[2559]: E0124 00:30:43.231794 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:43.231852 kubelet[2559]: E0124 00:30:43.231854 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:43.231979 kubelet[2559]: E0124 00:30:43.231921 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:43.235103 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:43.239727 containerd[1464]: time="2026-01-24T00:30:43.239562402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:42.949 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0 calico-apiserver-6896bc5cbd- calico-apiserver 231ec5ca-b889-4270-8427-6227d170b1c8 959 0 2026-01-24 00:30:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6896bc5cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6896bc5cbd-25rx6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic6798686840 [] [] }} ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:42.949 [INFO][4387] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.002 [INFO][4430] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" HandleID="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.003 [INFO][4430] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" HandleID="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6896bc5cbd-25rx6", "timestamp":"2026-01-24 00:30:43.002923584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.003 [INFO][4430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.096 [INFO][4430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.097 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.155 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.168 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.175 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.182 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.186 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.186 [INFO][4430] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.190 [INFO][4430] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.197 [INFO][4430] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" host="localhost" Jan 24 00:30:43.246206 
containerd[1464]: 2026-01-24 00:30:43.206 [INFO][4430] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.208 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" host="localhost" Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.210 [INFO][4430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:43.246206 containerd[1464]: 2026-01-24 00:30:43.211 [INFO][4430] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" HandleID="k8s-pod-network.6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.216 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"231ec5ca-b889-4270-8427-6227d170b1c8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6896bc5cbd-25rx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6798686840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.216 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.216 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6798686840 ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" 
Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.230 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.232 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"231ec5ca-b889-4270-8427-6227d170b1c8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a", Pod:"calico-apiserver-6896bc5cbd-25rx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6798686840", MAC:"2a:4a:ab:16:d4:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.247352 containerd[1464]: 2026-01-24 00:30:43.243 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-25rx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:30:43.299346 containerd[1464]: time="2026-01-24T00:30:43.297096471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:43.299346 containerd[1464]: time="2026-01-24T00:30:43.297152223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:43.299346 containerd[1464]: time="2026-01-24T00:30:43.297162633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.299346 containerd[1464]: time="2026-01-24T00:30:43.297287652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.305732 containerd[1464]: time="2026-01-24T00:30:43.305479152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xsh9r,Uid:9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2\"" Jan 24 00:30:43.322415 containerd[1464]: time="2026-01-24T00:30:43.322162738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:43.324145 containerd[1464]: time="2026-01-24T00:30:43.324107171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:30:43.324381 containerd[1464]: time="2026-01-24T00:30:43.324171714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:30:43.324887 kubelet[2559]: E0124 00:30:43.324797 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:43.324887 kubelet[2559]: E0124 00:30:43.324849 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:43.325428 kubelet[2559]: E0124 00:30:43.325017 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:43.325428 kubelet[2559]: E0124 00:30:43.325059 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:43.327703 containerd[1464]: time="2026-01-24T00:30:43.327448030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:30:43.332699 systemd[1]: Started cri-containerd-6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a.scope - libcontainer container 6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a. Jan 24 00:30:43.338342 systemd-networkd[1390]: calia6f9f710a6e: Link UP Jan 24 00:30:43.339831 systemd-networkd[1390]: calia6f9f710a6e: Gained carrier Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:42.982 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--tn4tb-eth0 coredns-66bc5c9577- kube-system 5f9c89a8-9a30-4332-8094-e2b372cfff86 960 0 2026-01-24 00:30:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-tn4tb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6f9f710a6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:42.983 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.034 [INFO][4438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" HandleID="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.035 [INFO][4438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" HandleID="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-tn4tb", "timestamp":"2026-01-24 00:30:43.034768298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.035 [INFO][4438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.213 [INFO][4438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.214 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.254 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.271 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.283 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.287 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.294 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.294 [INFO][4438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.296 [INFO][4438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.304 [INFO][4438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.316 [INFO][4438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.317 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" host="localhost" Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.317 [INFO][4438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:43.363552 containerd[1464]: 2026-01-24 00:30:43.317 [INFO][4438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" HandleID="k8s-pod-network.804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.325 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tn4tb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5f9c89a8-9a30-4332-8094-e2b372cfff86", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-tn4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6f9f710a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.333 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.333 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6f9f710a6e ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.339 
[INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.340 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tn4tb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5f9c89a8-9a30-4332-8094-e2b372cfff86", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c", Pod:"coredns-66bc5c9577-tn4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6f9f710a6e", MAC:"1a:b4:d5:55:90:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:43.364255 containerd[1464]: 2026-01-24 00:30:43.357 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4tb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:30:43.377829 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:43.394766 containerd[1464]: time="2026-01-24T00:30:43.392982525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:43.394766 containerd[1464]: time="2026-01-24T00:30:43.393034730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:43.394766 containerd[1464]: time="2026-01-24T00:30:43.393047393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.394766 containerd[1464]: time="2026-01-24T00:30:43.393151936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:43.399998 containerd[1464]: time="2026-01-24T00:30:43.399572781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:43.401751 containerd[1464]: time="2026-01-24T00:30:43.401576052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:30:43.401864 containerd[1464]: time="2026-01-24T00:30:43.401798310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:43.402272 kubelet[2559]: E0124 00:30:43.402161 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:43.402272 kubelet[2559]: E0124 00:30:43.402238 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:43.402355 kubelet[2559]: E0124 00:30:43.402326 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:43.402432 kubelet[2559]: E0124 00:30:43.402368 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:43.431935 systemd[1]: Started cri-containerd-804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c.scope - libcontainer container 804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c. 
Jan 24 00:30:43.434278 containerd[1464]: time="2026-01-24T00:30:43.434189838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-25rx6,Uid:231ec5ca-b889-4270-8427-6227d170b1c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a\"" Jan 24 00:30:43.437647 containerd[1464]: time="2026-01-24T00:30:43.437541516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:43.458660 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:43.494703 containerd[1464]: time="2026-01-24T00:30:43.494527428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4tb,Uid:5f9c89a8-9a30-4332-8094-e2b372cfff86,Namespace:kube-system,Attempt:1,} returns sandbox id \"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c\"" Jan 24 00:30:43.495641 kubelet[2559]: E0124 00:30:43.495538 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:43.502315 containerd[1464]: time="2026-01-24T00:30:43.502247415Z" level=info msg="CreateContainer within sandbox \"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:30:43.519413 containerd[1464]: time="2026-01-24T00:30:43.519323295Z" level=info msg="CreateContainer within sandbox \"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b3c97f1e272a43be00719ea39efd4d724d230830ad070cea704eb2d3cac2ce3\"" Jan 24 00:30:43.520368 containerd[1464]: time="2026-01-24T00:30:43.520337997Z" level=info msg="StartContainer for \"0b3c97f1e272a43be00719ea39efd4d724d230830ad070cea704eb2d3cac2ce3\"" Jan 24 00:30:43.527894 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Jan 24 00:30:43.534781 containerd[1464]: time="2026-01-24T00:30:43.534736697Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:43.536312 containerd[1464]: time="2026-01-24T00:30:43.536196129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:43.536312 containerd[1464]: time="2026-01-24T00:30:43.536270355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:43.536477 kubelet[2559]: E0124 00:30:43.536427 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:43.536477 kubelet[2559]: E0124 00:30:43.536489 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:43.536569 kubelet[2559]: E0124 00:30:43.536553 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:43.536749 kubelet[2559]: E0124 00:30:43.536642 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:43.560846 systemd[1]: Started cri-containerd-0b3c97f1e272a43be00719ea39efd4d724d230830ad070cea704eb2d3cac2ce3.scope - libcontainer container 0b3c97f1e272a43be00719ea39efd4d724d230830ad070cea704eb2d3cac2ce3. Jan 24 00:30:43.591112 containerd[1464]: time="2026-01-24T00:30:43.591054824Z" level=info msg="StartContainer for \"0b3c97f1e272a43be00719ea39efd4d724d230830ad070cea704eb2d3cac2ce3\" returns successfully" Jan 24 00:30:43.762573 systemd[1]: run-netns-cni\x2dd4133bc0\x2d4e09\x2da8a3\x2dd748\x2da1d94e183c28.mount: Deactivated successfully. Jan 24 00:30:43.904004 kubelet[2559]: E0124 00:30:43.903769 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:43.905818 kubelet[2559]: E0124 00:30:43.905025 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:43.908976 kubelet[2559]: E0124 00:30:43.908901 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:43.911912 kubelet[2559]: E0124 00:30:43.911854 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:43.971174 kubelet[2559]: I0124 00:30:43.969862 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tn4tb" podStartSLOduration=34.969842105 podStartE2EDuration="34.969842105s" podCreationTimestamp="2026-01-24 00:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:43.961497143 +0000 UTC m=+41.524064410" watchObservedRunningTime="2026-01-24 00:30:43.969842105 +0000 UTC m=+41.532409372" Jan 24 00:30:44.168919 systemd-networkd[1390]: cali93ded120bd4: Gained IPv6LL Jan 24 00:30:44.584412 containerd[1464]: time="2026-01-24T00:30:44.583520882Z" level=info msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.634 [INFO][4703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.634 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" iface="eth0" netns="/var/run/netns/cni-9ef8b818-febb-fb84-bc5c-b5f971b6cdcf" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.635 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" iface="eth0" netns="/var/run/netns/cni-9ef8b818-febb-fb84-bc5c-b5f971b6cdcf" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.635 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" iface="eth0" netns="/var/run/netns/cni-9ef8b818-febb-fb84-bc5c-b5f971b6cdcf" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.635 [INFO][4703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.635 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.654 [INFO][4712] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.655 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.655 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.663 [WARNING][4712] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.663 [INFO][4712] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.665 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:44.669469 containerd[1464]: 2026-01-24 00:30:44.667 [INFO][4703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:30:44.670198 containerd[1464]: time="2026-01-24T00:30:44.669737658Z" level=info msg="TearDown network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" successfully" Jan 24 00:30:44.670198 containerd[1464]: time="2026-01-24T00:30:44.669763696Z" level=info msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" returns successfully" Jan 24 00:30:44.672224 systemd[1]: run-netns-cni\x2d9ef8b818\x2dfebb\x2dfb84\x2dbc5c\x2db5f971b6cdcf.mount: Deactivated successfully. 
Jan 24 00:30:44.675441 containerd[1464]: time="2026-01-24T00:30:44.675350707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-7zn99,Uid:450f5953-3642-4725-b413-4ccd0b446f9a,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:30:44.680784 systemd-networkd[1390]: calic6798686840: Gained IPv6LL Jan 24 00:30:44.682564 systemd-networkd[1390]: calid467a2d6e80: Gained IPv6LL Jan 24 00:30:44.791440 systemd-networkd[1390]: cali0198d02e7e3: Link UP Jan 24 00:30:44.791769 systemd-networkd[1390]: cali0198d02e7e3: Gained carrier Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.719 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0 calico-apiserver-6896bc5cbd- calico-apiserver 450f5953-3642-4725-b413-4ccd0b446f9a 1016 0 2026-01-24 00:30:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6896bc5cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6896bc5cbd-7zn99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0198d02e7e3 [] [] }} ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.719 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.742 [INFO][4733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" HandleID="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.743 [INFO][4733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" HandleID="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6896bc5cbd-7zn99", "timestamp":"2026-01-24 00:30:44.742917909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.743 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.743 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.743 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.752 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.759 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.765 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.767 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.771 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.771 [INFO][4733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.773 [INFO][4733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72 Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.778 [INFO][4733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.784 [INFO][4733] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.784 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" host="localhost" Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.784 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:44.807800 containerd[1464]: 2026-01-24 00:30:44.784 [INFO][4733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" HandleID="k8s-pod-network.0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.787 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"450f5953-3642-4725-b413-4ccd0b446f9a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6896bc5cbd-7zn99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0198d02e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.787 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.787 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0198d02e7e3 ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.791 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.791 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"450f5953-3642-4725-b413-4ccd0b446f9a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72", Pod:"calico-apiserver-6896bc5cbd-7zn99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0198d02e7e3", MAC:"2e:86:0c:53:bc:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:44.808389 containerd[1464]: 2026-01-24 00:30:44.802 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72" Namespace="calico-apiserver" Pod="calico-apiserver-6896bc5cbd-7zn99" WorkloadEndpoint="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:30:44.836508 containerd[1464]: time="2026-01-24T00:30:44.836284486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:44.836508 containerd[1464]: time="2026-01-24T00:30:44.836373880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:44.836508 containerd[1464]: time="2026-01-24T00:30:44.836384440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:44.837015 containerd[1464]: time="2026-01-24T00:30:44.836907011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:44.867809 systemd[1]: Started cri-containerd-0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72.scope - libcontainer container 0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72. 
Jan 24 00:30:44.871949 systemd-networkd[1390]: calia6f9f710a6e: Gained IPv6LL Jan 24 00:30:44.882637 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:44.918448 containerd[1464]: time="2026-01-24T00:30:44.918414820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6896bc5cbd-7zn99,Uid:450f5953-3642-4725-b413-4ccd0b446f9a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72\"" Jan 24 00:30:44.920403 kubelet[2559]: E0124 00:30:44.919512 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:44.920403 kubelet[2559]: E0124 00:30:44.920257 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:44.921239 containerd[1464]: time="2026-01-24T00:30:44.920209089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:44.921306 kubelet[2559]: E0124 00:30:44.920804 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:44.922129 kubelet[2559]: E0124 00:30:44.922086 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:30:44.987894 containerd[1464]: time="2026-01-24T00:30:44.987798895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:44.989337 containerd[1464]: time="2026-01-24T00:30:44.989254411Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:44.989337 containerd[1464]: time="2026-01-24T00:30:44.989292319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:44.989797 kubelet[2559]: E0124 00:30:44.989547 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:44.989797 kubelet[2559]: E0124 00:30:44.989644 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:44.989797 kubelet[2559]: E0124 00:30:44.989705 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:44.989797 kubelet[2559]: E0124 00:30:44.989736 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:30:45.587367 containerd[1464]: time="2026-01-24T00:30:45.586985907Z" level=info msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" Jan 24 00:30:45.587367 containerd[1464]: time="2026-01-24T00:30:45.587231751Z" level=info msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.662 [INFO][4813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.662 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" iface="eth0" netns="/var/run/netns/cni-8df8c521-1eea-00e7-1450-291808098e7c" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.663 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" iface="eth0" netns="/var/run/netns/cni-8df8c521-1eea-00e7-1450-291808098e7c" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.664 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" iface="eth0" netns="/var/run/netns/cni-8df8c521-1eea-00e7-1450-291808098e7c" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.664 [INFO][4813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.664 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.737 [INFO][4829] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.737 [INFO][4829] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.737 [INFO][4829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.767 [WARNING][4829] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.768 [INFO][4829] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.772 [INFO][4829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:45.785245 containerd[1464]: 2026-01-24 00:30:45.777 [INFO][4813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:30:45.788412 containerd[1464]: time="2026-01-24T00:30:45.788273602Z" level=info msg="TearDown network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" successfully" Jan 24 00:30:45.788412 containerd[1464]: time="2026-01-24T00:30:45.788334455Z" level=info msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" returns successfully" Jan 24 00:30:45.790733 systemd[1]: run-netns-cni\x2d8df8c521\x2d1eea\x2d00e7\x2d1450\x2d291808098e7c.mount: Deactivated successfully. 
Jan 24 00:30:45.795127 kubelet[2559]: E0124 00:30:45.793752 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:45.796092 containerd[1464]: time="2026-01-24T00:30:45.795825436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ppq4k,Uid:db37e332-c065-4a9f-995f-7513b669f795,Namespace:kube-system,Attempt:1,}" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.680 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.681 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" iface="eth0" netns="/var/run/netns/cni-32459293-5c9d-0ca5-a2f7-2a4c00d5482f" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.681 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" iface="eth0" netns="/var/run/netns/cni-32459293-5c9d-0ca5-a2f7-2a4c00d5482f" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.684 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" iface="eth0" netns="/var/run/netns/cni-32459293-5c9d-0ca5-a2f7-2a4c00d5482f" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.684 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.684 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.735 [INFO][4835] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.738 [INFO][4835] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.772 [INFO][4835] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.790 [WARNING][4835] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.790 [INFO][4835] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.795 [INFO][4835] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:45.802751 containerd[1464]: 2026-01-24 00:30:45.799 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:30:45.804023 containerd[1464]: time="2026-01-24T00:30:45.803349811Z" level=info msg="TearDown network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" successfully" Jan 24 00:30:45.804023 containerd[1464]: time="2026-01-24T00:30:45.803381850Z" level=info msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" returns successfully" Jan 24 00:30:45.808379 systemd[1]: run-netns-cni\x2d32459293\x2d5c9d\x2d0ca5\x2da2f7\x2d2a4c00d5482f.mount: Deactivated successfully. Jan 24 00:30:45.809288 containerd[1464]: time="2026-01-24T00:30:45.808724808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc5d5c67c-njvsm,Uid:bba2b9d8-6e2d-4162-96fa-f25acd35c593,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:45.932452 kubelet[2559]: E0124 00:30:45.930565 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:45.933037 kubelet[2559]: E0124 00:30:45.932469 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:30:46.004215 systemd-networkd[1390]: calid98fc985786: Link UP Jan 24 00:30:46.005447 systemd-networkd[1390]: calid98fc985786: Gained carrier Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.876 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ppq4k-eth0 coredns-66bc5c9577- kube-system db37e332-c065-4a9f-995f-7513b669f795 1043 0 2026-01-24 00:30:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ppq4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid98fc985786 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 
8181 0 }] [] }} ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.877 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.930 [INFO][4878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" HandleID="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.930 [INFO][4878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" HandleID="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bea80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ppq4k", "timestamp":"2026-01-24 00:30:45.930671484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.930 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.931 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.931 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.940 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.947 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.964 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.968 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.972 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.972 [INFO][4878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.978 [INFO][4878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810 Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.986 [INFO][4878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.995 [INFO][4878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.995 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" host="localhost" Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.995 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
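
A note on the kubelet dns.go warnings that recur through this section: "Nameserver limits exceeded" means the node's /etc/resolv.conf lists more than three nameserver entries, while glibc's resolver only honours the first three (MAXNS), so kubelet truncates the list it propagates to pods, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". The check amounts to the following sketch (path and the limit of 3 as glibc defines them):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // glibc only uses the first 3 nameserver entries (MAXNS), so kubelet
        // trims the list and logs the omitted servers as a warning.
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > 3 {
            fmt.Printf("%d nameservers found, only %v would be applied\n",
                len(servers), servers[:3])
        }
    }
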
Jan 24 00:30:46.024163 containerd[1464]: 2026-01-24 00:30:45.995 [INFO][4878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" HandleID="k8s-pod-network.3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:45.997 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ppq4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db37e332-c065-4a9f-995f-7513b669f795", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ppq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid98fc985786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:45.997 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:45.997 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid98fc985786 ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:46.008 
[INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:46.009 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ppq4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db37e332-c065-4a9f-995f-7513b669f795", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810", Pod:"coredns-66bc5c9577-ppq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid98fc985786", MAC:"1a:33:c1:55:32:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:46.025396 containerd[1464]: 2026-01-24 00:30:46.020 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810" Namespace="kube-system" Pod="coredns-66bc5c9577-ppq4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:30:46.069650 containerd[1464]: time="2026-01-24T00:30:46.068217122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:46.069650 containerd[1464]: time="2026-01-24T00:30:46.068286570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:46.069650 containerd[1464]: time="2026-01-24T00:30:46.068300836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:46.069650 containerd[1464]: time="2026-01-24T00:30:46.068384299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:46.107433 systemd-networkd[1390]: calie64f790dbff: Link UP Jan 24 00:30:46.108929 systemd-networkd[1390]: calie64f790dbff: Gained carrier Jan 24 00:30:46.116506 systemd[1]: Started cri-containerd-3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810.scope - libcontainer container 3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810. Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.890 [INFO][4859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0 calico-kube-controllers-dc5d5c67c- calico-system bba2b9d8-6e2d-4162-96fa-f25acd35c593 1044 0 2026-01-24 00:30:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dc5d5c67c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dc5d5c67c-njvsm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie64f790dbff [] [] }} ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.890 [INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.943 [INFO][4885] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" HandleID="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.943 [INFO][4885] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" HandleID="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004345c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dc5d5c67c-njvsm", "timestamp":"2026-01-24 00:30:45.943032878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.943 
[INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.997 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:45.997 [INFO][4885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.041 [INFO][4885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.049 [INFO][4885] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.060 [INFO][4885] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.065 [INFO][4885] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.073 [INFO][4885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.073 [INFO][4885] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.076 [INFO][4885] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444 Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.086 [INFO][4885] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.097 [INFO][4885] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.097 [INFO][4885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" host="localhost" Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.097 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:46.138789 containerd[1464]: 2026-01-24 00:30:46.097 [INFO][4885] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" HandleID="k8s-pod-network.1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.102 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0", GenerateName:"calico-kube-controllers-dc5d5c67c-", Namespace:"calico-system", SelfLink:"", UID:"bba2b9d8-6e2d-4162-96fa-f25acd35c593", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc5d5c67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dc5d5c67c-njvsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie64f790dbff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.102 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.102 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie64f790dbff ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.114 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.115 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0", GenerateName:"calico-kube-controllers-dc5d5c67c-", Namespace:"calico-system", SelfLink:"", UID:"bba2b9d8-6e2d-4162-96fa-f25acd35c593", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc5d5c67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444", Pod:"calico-kube-controllers-dc5d5c67c-njvsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie64f790dbff", MAC:"7e:89:15:c9:e3:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:46.140557 containerd[1464]: 2026-01-24 00:30:46.131 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444" Namespace="calico-system" Pod="calico-kube-controllers-dc5d5c67c-njvsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:30:46.142875 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:46.195669 containerd[1464]: time="2026-01-24T00:30:46.193766797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:46.197093 containerd[1464]: time="2026-01-24T00:30:46.197046103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:46.197873 containerd[1464]: time="2026-01-24T00:30:46.197224070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:46.198531 containerd[1464]: time="2026-01-24T00:30:46.198074345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:46.200489 containerd[1464]: time="2026-01-24T00:30:46.200030285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ppq4k,Uid:db37e332-c065-4a9f-995f-7513b669f795,Namespace:kube-system,Attempt:1,} returns sandbox id \"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810\"" Jan 24 00:30:46.202223 kubelet[2559]: E0124 00:30:46.201891 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:46.214217 containerd[1464]: time="2026-01-24T00:30:46.214065557Z" level=info msg="CreateContainer within sandbox \"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:30:46.216823 systemd-networkd[1390]: cali0198d02e7e3: Gained IPv6LL Jan 24 00:30:46.237711 containerd[1464]: time="2026-01-24T00:30:46.237501321Z" level=info msg="CreateContainer within sandbox \"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0de6e6c60504ca0f7f5f749f0a92b5bd2461f2f6572044c78901029447dc95b7\"" Jan 24 00:30:46.238077 systemd[1]: Started cri-containerd-1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444.scope - libcontainer container 1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444. Jan 24 00:30:46.238287 containerd[1464]: time="2026-01-24T00:30:46.238254800Z" level=info msg="StartContainer for \"0de6e6c60504ca0f7f5f749f0a92b5bd2461f2f6572044c78901029447dc95b7\"" Jan 24 00:30:46.260976 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:30:46.283833 systemd[1]: Started cri-containerd-0de6e6c60504ca0f7f5f749f0a92b5bd2461f2f6572044c78901029447dc95b7.scope - libcontainer container 0de6e6c60504ca0f7f5f749f0a92b5bd2461f2f6572044c78901029447dc95b7. 
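
Each systemd-networkd "Gained IPv6LL" line in this section records a host-side cali* veth acquiring its fe80::/64 link-local address once the link comes up. Assuming the classic EUI-64 derivation (systemd-networkd can also be configured for stable-privacy addresses, so treat this as illustrative), the address is computed directly from the interface MAC; the sketch below uses the endpoint MAC 2e:86:0c:53:bc:1d recorded earlier purely as example input, since the host-side veth carries its own MAC:

    package main

    import (
        "fmt"
        "net"
    )

    // eui64LinkLocal derives an fe80::/64 link-local address from a MAC using
    // the EUI-64 scheme: flip the universal/local bit of the first octet and
    // insert ff:fe between the third and fourth octets.
    func eui64LinkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
        ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe // insert ff:fe in the middle
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("2e:86:0c:53:bc:1d") // endpoint MAC from the log, as example input
        fmt.Println(eui64LinkLocal(mac))            // fe80::2c86:cff:fe53:bc1d
    }
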
Jan 24 00:30:46.308809 containerd[1464]: time="2026-01-24T00:30:46.308724861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc5d5c67c-njvsm,Uid:bba2b9d8-6e2d-4162-96fa-f25acd35c593,Namespace:calico-system,Attempt:1,} returns sandbox id \"1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444\"" Jan 24 00:30:46.311479 containerd[1464]: time="2026-01-24T00:30:46.311269986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:30:46.323457 containerd[1464]: time="2026-01-24T00:30:46.323424356Z" level=info msg="StartContainer for \"0de6e6c60504ca0f7f5f749f0a92b5bd2461f2f6572044c78901029447dc95b7\" returns successfully" Jan 24 00:30:46.372529 containerd[1464]: time="2026-01-24T00:30:46.372486004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:46.374257 containerd[1464]: time="2026-01-24T00:30:46.374150918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:30:46.374353 containerd[1464]: time="2026-01-24T00:30:46.374226379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:46.374551 kubelet[2559]: E0124 00:30:46.374429 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:46.374551 kubelet[2559]: E0124 00:30:46.374525 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:46.374895 kubelet[2559]: E0124 00:30:46.374840 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:46.374987 kubelet[2559]: E0124 00:30:46.374891 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:46.935241 kubelet[2559]: E0124 
00:30:46.933834 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:46.936773 kubelet[2559]: E0124 00:30:46.936697 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:30:46.936773 kubelet[2559]: E0124 00:30:46.936715 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:46.966851 kubelet[2559]: I0124 00:30:46.966360 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ppq4k" podStartSLOduration=37.966344829 podStartE2EDuration="37.966344829s" podCreationTimestamp="2026-01-24 00:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:46.945657486 +0000 UTC m=+44.508224753" watchObservedRunningTime="2026-01-24 00:30:46.966344829 +0000 UTC m=+44.528912096" Jan 24 00:30:47.752055 systemd-networkd[1390]: calid98fc985786: Gained IPv6LL Jan 24 00:30:47.938309 kubelet[2559]: E0124 00:30:47.938203 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:47.939435 kubelet[2559]: E0124 00:30:47.939379 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:48.007911 systemd-networkd[1390]: calie64f790dbff: Gained IPv6LL Jan 24 00:30:48.941961 kubelet[2559]: E0124 00:30:48.941852 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:30:53.583916 containerd[1464]: time="2026-01-24T00:30:53.583837222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:30:53.642774 
containerd[1464]: time="2026-01-24T00:30:53.642709457Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:53.644496 containerd[1464]: time="2026-01-24T00:30:53.644307253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:30:53.644496 containerd[1464]: time="2026-01-24T00:30:53.644402630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:30:53.645412 kubelet[2559]: E0124 00:30:53.644682 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:53.645412 kubelet[2559]: E0124 00:30:53.645353 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:53.646059 kubelet[2559]: E0124 00:30:53.645561 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:53.646658 containerd[1464]: time="2026-01-24T00:30:53.646552542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:30:53.714660 containerd[1464]: time="2026-01-24T00:30:53.714425725Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:53.716314 containerd[1464]: time="2026-01-24T00:30:53.716197592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:30:53.716314 containerd[1464]: time="2026-01-24T00:30:53.716274168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:53.716902 kubelet[2559]: E0124 00:30:53.716797 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:53.716951 kubelet[2559]: E0124 00:30:53.716897 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:53.717112 kubelet[2559]: E0124 00:30:53.717026 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:53.717184 kubelet[2559]: E0124 00:30:53.717131 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:30:55.584829 containerd[1464]: time="2026-01-24T00:30:55.584742637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:30:55.645228 containerd[1464]: time="2026-01-24T00:30:55.645120488Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:55.646870 containerd[1464]: time="2026-01-24T00:30:55.646710696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:30:55.646870 containerd[1464]: time="2026-01-24T00:30:55.646802150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:55.647130 kubelet[2559]: E0124 00:30:55.647025 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:55.647130 kubelet[2559]: E0124 00:30:55.647120 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:55.647872 kubelet[2559]: E0124 00:30:55.647223 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:55.647872 kubelet[2559]: E0124 00:30:55.647278 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:30:56.584809 containerd[1464]: time="2026-01-24T00:30:56.584722190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:56.643733 containerd[1464]: time="2026-01-24T00:30:56.643563735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:56.645648 containerd[1464]: time="2026-01-24T00:30:56.645519874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:56.645729 containerd[1464]: time="2026-01-24T00:30:56.645664128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:56.646021 kubelet[2559]: E0124 00:30:56.645935 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:56.646021 kubelet[2559]: E0124 00:30:56.645984 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:56.646184 kubelet[2559]: E0124 00:30:56.646081 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:56.646184 kubelet[2559]: E0124 00:30:56.646132 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:30:58.584031 containerd[1464]: time="2026-01-24T00:30:58.583990803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:30:58.646010 containerd[1464]: time="2026-01-24T00:30:58.645923081Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:58.647283 containerd[1464]: time="2026-01-24T00:30:58.647171651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:30:58.647366 containerd[1464]: time="2026-01-24T00:30:58.647219991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:58.647560 kubelet[2559]: E0124 00:30:58.647485 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:58.647560 kubelet[2559]: E0124 00:30:58.647543 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:58.648112 kubelet[2559]: E0124 00:30:58.647708 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:58.648112 kubelet[2559]: E0124 00:30:58.647743 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:30:59.584533 containerd[1464]: time="2026-01-24T00:30:59.584321019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:30:59.651085 containerd[1464]: time="2026-01-24T00:30:59.650984783Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:59.652784 containerd[1464]: time="2026-01-24T00:30:59.652642091Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:30:59.652784 containerd[1464]: time="2026-01-24T00:30:59.652709791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:30:59.653529 kubelet[2559]: E0124 00:30:59.653377 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:59.653529 kubelet[2559]: E0124 00:30:59.653488 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:59.654164 kubelet[2559]: E0124 00:30:59.653577 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:59.656103 containerd[1464]: time="2026-01-24T00:30:59.655460346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:30:59.726644 containerd[1464]: time="2026-01-24T00:30:59.726480808Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:59.728646 containerd[1464]: time="2026-01-24T00:30:59.728446681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:30:59.728741 containerd[1464]: time="2026-01-24T00:30:59.728682678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:30:59.729046 kubelet[2559]: E0124 00:30:59.728924 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:59.729046 kubelet[2559]: E0124 00:30:59.729016 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:59.729144 kubelet[2559]: E0124 00:30:59.729116 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:59.729226 kubelet[2559]: E0124 00:30:59.729177 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:31:01.585224 containerd[1464]: time="2026-01-24T00:31:01.584933582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:01.648234 containerd[1464]: time="2026-01-24T00:31:01.648162486Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:01.649554 containerd[1464]: time="2026-01-24T00:31:01.649481034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:01.649554 containerd[1464]: time="2026-01-24T00:31:01.649514010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:01.649908 kubelet[2559]: E0124 00:31:01.649824 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:01.649908 kubelet[2559]: E0124 00:31:01.649883 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:01.650268 kubelet[2559]: E0124 00:31:01.649988 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:01.650268 kubelet[2559]: E0124 00:31:01.650022 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:31:02.556541 containerd[1464]: time="2026-01-24T00:31:02.556461718Z" level=info msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.609 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xsh9r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2", Pod:"goldmane-7c778bb748-xsh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93ded120bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.610 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.610 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" iface="eth0" netns="" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.610 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.610 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.642 [INFO][5086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.642 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.642 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.652 [WARNING][5086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.652 [INFO][5086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.654 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:02.660952 containerd[1464]: 2026-01-24 00:31:02.657 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.662221 containerd[1464]: time="2026-01-24T00:31:02.660995056Z" level=info msg="TearDown network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" successfully" Jan 24 00:31:02.662221 containerd[1464]: time="2026-01-24T00:31:02.661035953Z" level=info msg="StopPodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" returns successfully" Jan 24 00:31:02.672352 containerd[1464]: time="2026-01-24T00:31:02.672143094Z" level=info msg="RemovePodSandbox for \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" Jan 24 00:31:02.675292 containerd[1464]: time="2026-01-24T00:31:02.675195759Z" level=info msg="Forcibly stopping sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\"" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.725 [WARNING][5103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xsh9r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b55aff64ab50db14d2f901d18c2fe12bfa86e37cb7b7e27e1d993729bfc9cc2", Pod:"goldmane-7c778bb748-xsh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93ded120bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.726 [INFO][5103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.726 [INFO][5103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" iface="eth0" netns="" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.726 [INFO][5103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.726 [INFO][5103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.760 [INFO][5112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.760 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.760 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.768 [WARNING][5112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.768 [INFO][5112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" HandleID="k8s-pod-network.d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Workload="localhost-k8s-goldmane--7c778bb748--xsh9r-eth0" Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.770 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:02.776484 containerd[1464]: 2026-01-24 00:31:02.773 [INFO][5103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c" Jan 24 00:31:02.776484 containerd[1464]: time="2026-01-24T00:31:02.776465907Z" level=info msg="TearDown network for sandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" successfully" Jan 24 00:31:02.784031 containerd[1464]: time="2026-01-24T00:31:02.783962985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:02.784112 containerd[1464]: time="2026-01-24T00:31:02.784058262Z" level=info msg="RemovePodSandbox \"d82be96098621de954f0ff42147748d017f95b62a43fe21e8a5afbccad7f5d6c\" returns successfully" Jan 24 00:31:02.784876 containerd[1464]: time="2026-01-24T00:31:02.784847293Z" level=info msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.820 [WARNING][5130] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" WorkloadEndpoint="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.821 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.821 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" iface="eth0" netns="" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.821 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.821 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.848 [INFO][5139] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.849 [INFO][5139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.849 [INFO][5139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.857 [WARNING][5139] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.857 [INFO][5139] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.859 [INFO][5139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:02.866761 containerd[1464]: 2026-01-24 00:31:02.864 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.867365 containerd[1464]: time="2026-01-24T00:31:02.867243949Z" level=info msg="TearDown network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" successfully" Jan 24 00:31:02.867365 containerd[1464]: time="2026-01-24T00:31:02.867339666Z" level=info msg="StopPodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" returns successfully" Jan 24 00:31:02.868108 containerd[1464]: time="2026-01-24T00:31:02.868071533Z" level=info msg="RemovePodSandbox for \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" Jan 24 00:31:02.868245 containerd[1464]: time="2026-01-24T00:31:02.868114063Z" level=info msg="Forcibly stopping sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\"" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.918 [WARNING][5157] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" WorkloadEndpoint="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.918 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.918 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" iface="eth0" netns="" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.919 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.919 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.953 [INFO][5165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.953 [INFO][5165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.953 [INFO][5165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.960 [WARNING][5165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.960 [INFO][5165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" HandleID="k8s-pod-network.0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Workload="localhost-k8s-whisker--66bb8788df--jbgmv-eth0" Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.963 [INFO][5165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:02.972668 containerd[1464]: 2026-01-24 00:31:02.967 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647" Jan 24 00:31:02.972668 containerd[1464]: time="2026-01-24T00:31:02.970754407Z" level=info msg="TearDown network for sandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" successfully" Jan 24 00:31:02.976818 containerd[1464]: time="2026-01-24T00:31:02.976727775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:02.976818 containerd[1464]: time="2026-01-24T00:31:02.976801592Z" level=info msg="RemovePodSandbox \"0292387efdbf14af91dcc88e4e1eadaa78e76ac95492c0fa8d402d526d933647\" returns successfully" Jan 24 00:31:02.977620 containerd[1464]: time="2026-01-24T00:31:02.977538780Z" level=info msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.031 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"450f5953-3642-4725-b413-4ccd0b446f9a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72", Pod:"calico-apiserver-6896bc5cbd-7zn99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0198d02e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.032 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.032 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" iface="eth0" netns="" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.032 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.032 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.074 [INFO][5190] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.074 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.075 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.083 [WARNING][5190] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.083 [INFO][5190] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.088 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.095763 containerd[1464]: 2026-01-24 00:31:03.091 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.096451 containerd[1464]: time="2026-01-24T00:31:03.095846951Z" level=info msg="TearDown network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" successfully" Jan 24 00:31:03.096451 containerd[1464]: time="2026-01-24T00:31:03.095883838Z" level=info msg="StopPodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" returns successfully" Jan 24 00:31:03.096899 containerd[1464]: time="2026-01-24T00:31:03.096869104Z" level=info msg="RemovePodSandbox for \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" Jan 24 00:31:03.097015 containerd[1464]: time="2026-01-24T00:31:03.096903318Z" level=info msg="Forcibly stopping sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\"" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.145 [WARNING][5208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"450f5953-3642-4725-b413-4ccd0b446f9a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3952b89320faeb6892439f496f5b98f101a53d1cee5209c3c7a4b59a7e3b72", Pod:"calico-apiserver-6896bc5cbd-7zn99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0198d02e7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.146 [INFO][5208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.146 [INFO][5208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" iface="eth0" netns="" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.146 [INFO][5208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.146 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.203 [INFO][5216] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.204 [INFO][5216] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.204 [INFO][5216] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.213 [WARNING][5216] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.213 [INFO][5216] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" HandleID="k8s-pod-network.030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--7zn99-eth0" Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.216 [INFO][5216] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.221776 containerd[1464]: 2026-01-24 00:31:03.219 [INFO][5208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77" Jan 24 00:31:03.221776 containerd[1464]: time="2026-01-24T00:31:03.221686606Z" level=info msg="TearDown network for sandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" successfully" Jan 24 00:31:03.236340 containerd[1464]: time="2026-01-24T00:31:03.236201211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:03.236340 containerd[1464]: time="2026-01-24T00:31:03.236310414Z" level=info msg="RemovePodSandbox \"030619adb19f1b67dee7086f138d3d163310673e89cdabfe298f94d45d3e4e77\" returns successfully" Jan 24 00:31:03.237115 containerd[1464]: time="2026-01-24T00:31:03.237062047Z" level=info msg="StopPodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.288 [WARNING][5235] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--frc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40661dc3-d91f-42a2-a397-77dbe1e37cee", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0", Pod:"csi-node-driver-frc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid467a2d6e80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.289 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.289 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" iface="eth0" netns="" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.289 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.289 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.322 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.322 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.323 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.331 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.331 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.334 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.339813 containerd[1464]: 2026-01-24 00:31:03.337 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.340465 containerd[1464]: time="2026-01-24T00:31:03.339871225Z" level=info msg="TearDown network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" successfully" Jan 24 00:31:03.340465 containerd[1464]: time="2026-01-24T00:31:03.339903174Z" level=info msg="StopPodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" returns successfully" Jan 24 00:31:03.340802 containerd[1464]: time="2026-01-24T00:31:03.340695228Z" level=info msg="RemovePodSandbox for \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" Jan 24 00:31:03.340802 containerd[1464]: time="2026-01-24T00:31:03.340760499Z" level=info msg="Forcibly stopping sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\"" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.404 [WARNING][5261] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--frc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40661dc3-d91f-42a2-a397-77dbe1e37cee", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdef3e72c0d6bb7bd875317ede63b4126e49a810a58da4a4c0d28cf297289f0", Pod:"csi-node-driver-frc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid467a2d6e80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.405 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.405 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" iface="eth0" netns="" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.405 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.405 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.431 [INFO][5270] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.431 [INFO][5270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.431 [INFO][5270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.438 [WARNING][5270] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.438 [INFO][5270] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" HandleID="k8s-pod-network.55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Workload="localhost-k8s-csi--node--driver--frc8p-eth0" Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.440 [INFO][5270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.446433 containerd[1464]: 2026-01-24 00:31:03.443 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b" Jan 24 00:31:03.447075 containerd[1464]: time="2026-01-24T00:31:03.446476968Z" level=info msg="TearDown network for sandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" successfully" Jan 24 00:31:03.456843 containerd[1464]: time="2026-01-24T00:31:03.456741056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:03.456843 containerd[1464]: time="2026-01-24T00:31:03.456828737Z" level=info msg="RemovePodSandbox \"55a72e07de520bdf89cd2bbc0f572e85428615e31757a1fc0c6b2923ed8be27b\" returns successfully" Jan 24 00:31:03.457471 containerd[1464]: time="2026-01-24T00:31:03.457391451Z" level=info msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.494 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ppq4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db37e332-c065-4a9f-995f-7513b669f795", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810", Pod:"coredns-66bc5c9577-ppq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid98fc985786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.495 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.495 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" iface="eth0" netns="" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.495 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.495 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.518 [INFO][5297] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.518 [INFO][5297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.518 [INFO][5297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.525 [WARNING][5297] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.525 [INFO][5297] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.526 [INFO][5297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.532106 containerd[1464]: 2026-01-24 00:31:03.529 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.532106 containerd[1464]: time="2026-01-24T00:31:03.532022396Z" level=info msg="TearDown network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" successfully" Jan 24 00:31:03.532106 containerd[1464]: time="2026-01-24T00:31:03.532056067Z" level=info msg="StopPodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" returns successfully" Jan 24 00:31:03.533216 containerd[1464]: time="2026-01-24T00:31:03.533023237Z" level=info msg="RemovePodSandbox for \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" Jan 24 00:31:03.533216 containerd[1464]: time="2026-01-24T00:31:03.533090161Z" level=info msg="Forcibly stopping sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\"" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.581 [WARNING][5315] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ppq4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db37e332-c065-4a9f-995f-7513b669f795", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3063120428a51835fb5c5de2553b9d331364be21911ad7c0dc486b06272d3810", Pod:"coredns-66bc5c9577-ppq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid98fc985786", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.582 [INFO][5315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.582 [INFO][5315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" iface="eth0" netns="" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.582 [INFO][5315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.582 [INFO][5315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.606 [INFO][5324] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.606 [INFO][5324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.606 [INFO][5324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.614 [WARNING][5324] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.614 [INFO][5324] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" HandleID="k8s-pod-network.01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Workload="localhost-k8s-coredns--66bc5c9577--ppq4k-eth0" Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.617 [INFO][5324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.622380 containerd[1464]: 2026-01-24 00:31:03.620 [INFO][5315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f" Jan 24 00:31:03.623030 containerd[1464]: time="2026-01-24T00:31:03.622451767Z" level=info msg="TearDown network for sandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" successfully" Jan 24 00:31:03.626783 containerd[1464]: time="2026-01-24T00:31:03.626664994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:03.626783 containerd[1464]: time="2026-01-24T00:31:03.626727200Z" level=info msg="RemovePodSandbox \"01c7e4f6372a64cbed9faa6a7cc866318116e9dd15585628fe4abf4eb139916f\" returns successfully" Jan 24 00:31:03.627439 containerd[1464]: time="2026-01-24T00:31:03.627271258Z" level=info msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.671 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tn4tb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5f9c89a8-9a30-4332-8094-e2b372cfff86", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c", Pod:"coredns-66bc5c9577-tn4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6f9f710a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.671 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.671 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" iface="eth0" netns="" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.671 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.671 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.696 [INFO][5352] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.696 [INFO][5352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.697 [INFO][5352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.705 [WARNING][5352] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.705 [INFO][5352] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.707 [INFO][5352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.712892 containerd[1464]: 2026-01-24 00:31:03.709 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.713812 containerd[1464]: time="2026-01-24T00:31:03.712944892Z" level=info msg="TearDown network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" successfully" Jan 24 00:31:03.713812 containerd[1464]: time="2026-01-24T00:31:03.712986259Z" level=info msg="StopPodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" returns successfully" Jan 24 00:31:03.713943 containerd[1464]: time="2026-01-24T00:31:03.713880676Z" level=info msg="RemovePodSandbox for \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" Jan 24 00:31:03.713973 containerd[1464]: time="2026-01-24T00:31:03.713943193Z" level=info msg="Forcibly stopping sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\"" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.754 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tn4tb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5f9c89a8-9a30-4332-8094-e2b372cfff86", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"804d23f7260bb081d50ed157044b08c33d4f4970eb131b3bdf67e07361e2003c", Pod:"coredns-66bc5c9577-tn4tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6f9f710a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.755 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.755 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" iface="eth0" netns="" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.755 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.755 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.787 [INFO][5379] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.787 [INFO][5379] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.787 [INFO][5379] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.795 [WARNING][5379] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.795 [INFO][5379] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" HandleID="k8s-pod-network.bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Workload="localhost-k8s-coredns--66bc5c9577--tn4tb-eth0" Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.798 [INFO][5379] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.804303 containerd[1464]: 2026-01-24 00:31:03.801 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566" Jan 24 00:31:03.805094 containerd[1464]: time="2026-01-24T00:31:03.804348418Z" level=info msg="TearDown network for sandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" successfully" Jan 24 00:31:03.809292 containerd[1464]: time="2026-01-24T00:31:03.809213345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:03.809431 containerd[1464]: time="2026-01-24T00:31:03.809367230Z" level=info msg="RemovePodSandbox \"bd79128167a8c10ae63a8d145d2aa27c550bb6bbf366712630706659a3228566\" returns successfully" Jan 24 00:31:03.810208 containerd[1464]: time="2026-01-24T00:31:03.810177083Z" level=info msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.857 [WARNING][5398] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0", GenerateName:"calico-kube-controllers-dc5d5c67c-", Namespace:"calico-system", SelfLink:"", UID:"bba2b9d8-6e2d-4162-96fa-f25acd35c593", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc5d5c67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444", Pod:"calico-kube-controllers-dc5d5c67c-njvsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie64f790dbff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.858 [INFO][5398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.858 [INFO][5398] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" iface="eth0" netns="" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.858 [INFO][5398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.858 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.882 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.882 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.882 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.891 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.891 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.894 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.899717 containerd[1464]: 2026-01-24 00:31:03.896 [INFO][5398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.900194 containerd[1464]: time="2026-01-24T00:31:03.899764407Z" level=info msg="TearDown network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" successfully" Jan 24 00:31:03.900194 containerd[1464]: time="2026-01-24T00:31:03.899791758Z" level=info msg="StopPodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" returns successfully" Jan 24 00:31:03.900696 containerd[1464]: time="2026-01-24T00:31:03.900558876Z" level=info msg="RemovePodSandbox for \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" Jan 24 00:31:03.900696 containerd[1464]: time="2026-01-24T00:31:03.900691452Z" level=info msg="Forcibly stopping sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\"" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.941 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0", GenerateName:"calico-kube-controllers-dc5d5c67c-", Namespace:"calico-system", SelfLink:"", UID:"bba2b9d8-6e2d-4162-96fa-f25acd35c593", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc5d5c67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e5284188243274c25f26f4ad1dace605b34c0315c6e0e32f9114cb6e9dcf444", Pod:"calico-kube-controllers-dc5d5c67c-njvsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie64f790dbff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.942 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.942 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" iface="eth0" netns="" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.942 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.942 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.980 [INFO][5433] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.980 [INFO][5433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.981 [INFO][5433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.988 [WARNING][5433] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.988 [INFO][5433] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" HandleID="k8s-pod-network.7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Workload="localhost-k8s-calico--kube--controllers--dc5d5c67c--njvsm-eth0" Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.991 [INFO][5433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:03.996526 containerd[1464]: 2026-01-24 00:31:03.994 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73" Jan 24 00:31:03.996946 containerd[1464]: time="2026-01-24T00:31:03.996555919Z" level=info msg="TearDown network for sandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" successfully" Jan 24 00:31:04.000427 containerd[1464]: time="2026-01-24T00:31:04.000269531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:04.000427 containerd[1464]: time="2026-01-24T00:31:04.000325023Z" level=info msg="RemovePodSandbox \"7dabb42ad101d5f1ba02d9eae734efef75303a14a59a23c827ea6097961d6d73\" returns successfully" Jan 24 00:31:04.000875 containerd[1464]: time="2026-01-24T00:31:04.000841247Z" level=info msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.040 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"231ec5ca-b889-4270-8427-6227d170b1c8", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a", Pod:"calico-apiserver-6896bc5cbd-25rx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6798686840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.040 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.040 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" iface="eth0" netns="" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.040 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.040 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.076 [INFO][5460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.077 [INFO][5460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.077 [INFO][5460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.085 [WARNING][5460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.085 [INFO][5460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.088 [INFO][5460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:04.094430 containerd[1464]: 2026-01-24 00:31:04.091 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.094430 containerd[1464]: time="2026-01-24T00:31:04.094344146Z" level=info msg="TearDown network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" successfully" Jan 24 00:31:04.094430 containerd[1464]: time="2026-01-24T00:31:04.094385282Z" level=info msg="StopPodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" returns successfully" Jan 24 00:31:04.097348 containerd[1464]: time="2026-01-24T00:31:04.097269909Z" level=info msg="RemovePodSandbox for \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" Jan 24 00:31:04.097348 containerd[1464]: time="2026-01-24T00:31:04.097338617Z" level=info msg="Forcibly stopping sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\"" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.161 [WARNING][5478] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0", GenerateName:"calico-apiserver-6896bc5cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"231ec5ca-b889-4270-8427-6227d170b1c8", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6896bc5cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a8447e0cfa2db962e668f07b1c58b44ed09e5a78963d0cad6f3840ba8ea7d8a", Pod:"calico-apiserver-6896bc5cbd-25rx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6798686840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.162 [INFO][5478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.162 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" iface="eth0" netns="" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.162 [INFO][5478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.162 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.185 [INFO][5487] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.186 [INFO][5487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.186 [INFO][5487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.193 [WARNING][5487] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.193 [INFO][5487] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" HandleID="k8s-pod-network.a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Workload="localhost-k8s-calico--apiserver--6896bc5cbd--25rx6-eth0" Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.195 [INFO][5487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:04.202328 containerd[1464]: 2026-01-24 00:31:04.198 [INFO][5478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83" Jan 24 00:31:04.202328 containerd[1464]: time="2026-01-24T00:31:04.202299571Z" level=info msg="TearDown network for sandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" successfully" Jan 24 00:31:04.206936 containerd[1464]: time="2026-01-24T00:31:04.206857265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:04.206936 containerd[1464]: time="2026-01-24T00:31:04.206932073Z" level=info msg="RemovePodSandbox \"a3c8ba00c201e4757bca6fc37106d4657b4529edc1ac66775a2be5f89cb95d83\" returns successfully" Jan 24 00:31:07.584223 kubelet[2559]: E0124 00:31:07.584123 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:31:08.377415 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:47336.service - OpenSSH per-connection server daemon (10.0.0.1:47336). Jan 24 00:31:08.466509 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 47336 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:08.470954 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:08.478948 systemd-logind[1444]: New session 8 of user core. Jan 24 00:31:08.486142 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 24 00:31:08.597191 kubelet[2559]: E0124 00:31:08.597090 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:31:08.724559 sshd[5498]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:08.729840 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:31:08.731568 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:47336.service: Deactivated successfully. Jan 24 00:31:08.733796 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:31:08.735076 systemd-logind[1444]: Removed session 8. Jan 24 00:31:08.918430 systemd[1]: run-containerd-runc-k8s.io-f59f63ff9eef3742a5ee467649d29250afba85e615d9cd0db0d85e4622cce342-runc.ggwYjx.mount: Deactivated successfully. Jan 24 00:31:09.012039 kubelet[2559]: E0124 00:31:09.011840 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:09.591208 kubelet[2559]: E0124 00:31:09.591071 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:31:11.585508 kubelet[2559]: E0124 00:31:11.585301 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" 
podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:31:12.584258 kubelet[2559]: E0124 00:31:12.584132 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:12.585523 kubelet[2559]: E0124 00:31:12.585242 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:31:13.736789 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:47338.service - OpenSSH per-connection server daemon (10.0.0.1:47338). Jan 24 00:31:13.807085 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 47338 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:13.809573 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:13.816578 systemd-logind[1444]: New session 9 of user core. Jan 24 00:31:13.826857 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:31:13.985453 sshd[5544]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:13.991037 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:47338.service: Deactivated successfully. Jan 24 00:31:13.993931 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:31:13.995031 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:31:13.996522 systemd-logind[1444]: Removed session 9. Jan 24 00:31:15.585855 kubelet[2559]: E0124 00:31:15.585488 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:31:18.675814 update_engine[1450]: I20260124 00:31:18.675684 1450 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:31:18.675814 update_engine[1450]: I20260124 00:31:18.675760 1450 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:31:18.676318 update_engine[1450]: I20260124 00:31:18.676256 1450 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:31:18.677015 update_engine[1450]: I20260124 00:31:18.676954 1450 omaha_request_params.cc:62] Current group set to lts Jan 24 00:31:18.677243 update_engine[1450]: I20260124 00:31:18.677184 1450 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:31:18.677243 update_engine[1450]: I20260124 00:31:18.677211 1450 update_attempter.cc:643] Scheduling an action processor start. 
Jan 24 00:31:18.677243 update_engine[1450]: I20260124 00:31:18.677233 1450 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:31:18.677332 update_engine[1450]: I20260124 00:31:18.677289 1450 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:31:18.677456 update_engine[1450]: I20260124 00:31:18.677371 1450 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:31:18.677456 update_engine[1450]: I20260124 00:31:18.677426 1450 omaha_request_action.cc:272] Request: Jan 24 00:31:18.677456 update_engine[1450]: [Omaha request XML body not preserved in this capture] Jan 24 00:31:18.677456 update_engine[1450]: I20260124 00:31:18.677453 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:31:18.683997 update_engine[1450]: I20260124 00:31:18.683806 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:31:18.684512 update_engine[1450]: I20260124 00:31:18.684314 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:31:18.684556 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:31:18.698642 update_engine[1450]: E20260124 00:31:18.698520 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:31:18.698745 update_engine[1450]: I20260124 00:31:18.698706 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:31:19.004222 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478). Jan 24 00:31:19.050108 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:19.052177 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:19.057899 systemd-logind[1444]: New session 10 of user core. Jan 24 00:31:19.064898 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:31:19.235558 sshd[5561]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:19.240434 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:58478.service: Deactivated successfully. Jan 24 00:31:19.242674 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:31:19.243512 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:31:19.245059 systemd-logind[1444]: Removed session 10.
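[Annotation] The update_engine entries above show why every update check fails here: the Omaha server URL has been set to the literal string "disabled", so curl cannot resolve the host, and the fetcher reschedules the transfer with a short timeout and a retry counter. A rough Go sketch of that retry loop under those assumptions (the attempt count and delay are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetries mimics the loop visible in the log: try the transfer,
// log the failure, sleep, and try again up to a fixed attempt count.
func fetchWithRetries(url string, attempts int, delay time.Duration) error {
	var err error
	for i := 1; i <= attempts; i++ {
		var resp *http.Response
		resp, err = http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", i, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// "disabled" is not a resolvable hostname, so every attempt fails the
	// same way the log's "Could not resolve host: disabled" entries do.
	if err := fetchWithRetries("http://disabled/", 3, time.Second); err != nil {
		fmt.Println("Omaha request network transfer failed:", err)
	}
}
```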
Jan 24 00:31:19.584644 containerd[1464]: time="2026-01-24T00:31:19.584346310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:31:19.662151 containerd[1464]: time="2026-01-24T00:31:19.662046494Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:19.663878 containerd[1464]: time="2026-01-24T00:31:19.663768407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:31:19.663878 containerd[1464]: time="2026-01-24T00:31:19.663877199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:31:19.664470 kubelet[2559]: E0124 00:31:19.664007 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:19.664470 kubelet[2559]: E0124 00:31:19.664051 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:19.664470 kubelet[2559]: E0124 00:31:19.664115 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:19.665856 containerd[1464]: time="2026-01-24T00:31:19.665716631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:31:19.727134 containerd[1464]: time="2026-01-24T00:31:19.727034128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:19.728715 containerd[1464]: time="2026-01-24T00:31:19.728559805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:31:19.728715 containerd[1464]: time="2026-01-24T00:31:19.728669588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:19.729054 kubelet[2559]: E0124 00:31:19.728971 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:19.729054 kubelet[2559]: E0124 00:31:19.729035 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:19.729285 kubelet[2559]: E0124 00:31:19.729159 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:19.729285 kubelet[2559]: E0124 00:31:19.729246 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:31:20.585217 containerd[1464]: time="2026-01-24T00:31:20.585170174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:20.650062 containerd[1464]: time="2026-01-24T00:31:20.649328614Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:20.652294 containerd[1464]: time="2026-01-24T00:31:20.652175789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:20.652468 containerd[1464]: time="2026-01-24T00:31:20.652258923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:20.652775 kubelet[2559]: E0124 00:31:20.652661 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:20.652775 kubelet[2559]: E0124 00:31:20.652747 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:20.652904 kubelet[2559]: E0124 00:31:20.652875 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:20.652945 kubelet[2559]: E0124 00:31:20.652922 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:31:22.584953 containerd[1464]: time="2026-01-24T00:31:22.584682360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:31:22.655342 containerd[1464]: time="2026-01-24T00:31:22.654169713Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:22.657525 containerd[1464]: time="2026-01-24T00:31:22.657351494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:31:22.658026 containerd[1464]: time="2026-01-24T00:31:22.657854090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:22.658073 kubelet[2559]: E0124 00:31:22.657895 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:22.658073 kubelet[2559]: E0124 00:31:22.657956 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:22.658525 kubelet[2559]: E0124 00:31:22.658072 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:22.658525 kubelet[2559]: E0124 00:31:22.658117 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:31:23.584824 containerd[1464]: time="2026-01-24T00:31:23.584755694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:31:23.650717 containerd[1464]: time="2026-01-24T00:31:23.647988360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:23.653570 containerd[1464]: time="2026-01-24T00:31:23.653030375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:31:23.653570 containerd[1464]: time="2026-01-24T00:31:23.653130331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:23.667323 kubelet[2559]: E0124 00:31:23.654035 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:23.667323 kubelet[2559]: E0124 00:31:23.654100 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:23.667323 kubelet[2559]: E0124 00:31:23.654275 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:23.667323 kubelet[2559]: E0124 00:31:23.654315 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:31:24.271148 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486). 
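[Annotation] Each of the PullImage failures above follows the same path: containerd's resolver asks the registry for the tag's manifest, receives HTTP 404, logs "trying next host", and, with no hosts left, surfaces a NotFound error that kubelet then wraps in ErrImagePull. A simplified Go sketch of that host-fallback loop; it uses the standard registry v2 manifest endpoint but omits the auth handshake a real registry such as ghcr.io requires:

```go
package main

import (
	"fmt"
	"net/http"
)

// resolveManifest probes each candidate host for the image's manifest and
// moves on when a host answers 404 -- the "trying next host - response was
// http.StatusNotFound" step in the log.
func resolveManifest(hosts []string, name, tag string) error {
	for _, host := range hosts {
		url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", host, name, tag)
		resp, err := http.Head(url)
		if err != nil {
			continue // unreachable host: fall through to the next candidate
		}
		resp.Body.Close()
		switch resp.StatusCode {
		case http.StatusOK:
			return nil
		case http.StatusNotFound:
			fmt.Printf("trying next host - response was http.StatusNotFound host=%s\n", host)
		}
	}
	return fmt.Errorf("failed to resolve reference %q: not found", name+":"+tag)
}

func main() {
	err := resolveManifest([]string{"ghcr.io"}, "flatcar/calico/goldmane", "v3.30.4")
	fmt.Println(err)
}
```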
Jan 24 00:31:24.326021 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:24.328485 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:24.335016 systemd-logind[1444]: New session 11 of user core. Jan 24 00:31:24.351026 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:31:24.540457 sshd[5583]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:24.548045 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:58486.service: Deactivated successfully. Jan 24 00:31:24.553973 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:31:24.556513 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:31:24.559306 systemd-logind[1444]: Removed session 11. Jan 24 00:31:24.587362 containerd[1464]: time="2026-01-24T00:31:24.587101770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:31:24.665829 containerd[1464]: time="2026-01-24T00:31:24.665717768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:24.667899 containerd[1464]: time="2026-01-24T00:31:24.667823007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:31:24.667899 containerd[1464]: time="2026-01-24T00:31:24.667859162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:31:24.668150 kubelet[2559]: E0124 00:31:24.668114 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:24.668790 kubelet[2559]: E0124 00:31:24.668156 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:24.668790 kubelet[2559]: E0124 00:31:24.668248 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:24.669654 containerd[1464]: time="2026-01-24T00:31:24.669365417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:31:24.739930 containerd[1464]: time="2026-01-24T00:31:24.739798842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:24.741547 containerd[1464]: time="2026-01-24T00:31:24.741388501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:31:24.741547 containerd[1464]: time="2026-01-24T00:31:24.741487452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:31:24.742029 kubelet[2559]: E0124 00:31:24.741923 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:24.742029 kubelet[2559]: E0124 00:31:24.742005 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:24.742210 kubelet[2559]: E0124 00:31:24.742130 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:24.742304 kubelet[2559]: E0124 00:31:24.742207 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:31:26.584997 containerd[1464]: time="2026-01-24T00:31:26.584863009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:26.654197 containerd[1464]: time="2026-01-24T00:31:26.653960478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:26.656308 containerd[1464]: time="2026-01-24T00:31:26.656266755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:26.658198 containerd[1464]: 
time="2026-01-24T00:31:26.656373293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:26.658300 kubelet[2559]: E0124 00:31:26.657217 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:26.658300 kubelet[2559]: E0124 00:31:26.657281 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:26.658300 kubelet[2559]: E0124 00:31:26.657369 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:26.658300 kubelet[2559]: E0124 00:31:26.657413 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:31:28.585643 kubelet[2559]: E0124 00:31:28.584651 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:28.658345 update_engine[1450]: I20260124 00:31:28.658140 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:31:28.659026 update_engine[1450]: I20260124 00:31:28.658725 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:31:28.659126 update_engine[1450]: I20260124 00:31:28.659067 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:31:28.676186 update_engine[1450]: E20260124 00:31:28.676004 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:31:28.676186 update_engine[1450]: I20260124 00:31:28.676165 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:31:29.573321 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:58546.service - OpenSSH per-connection server daemon (10.0.0.1:58546). Jan 24 00:31:29.652961 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 58546 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:29.656129 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:29.665509 systemd-logind[1444]: New session 12 of user core. Jan 24 00:31:29.675900 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 24 00:31:29.897651 sshd[5602]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:29.915813 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:58546.service: Deactivated successfully. Jan 24 00:31:29.918814 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:31:29.922431 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:31:29.929673 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:58558.service - OpenSSH per-connection server daemon (10.0.0.1:58558). Jan 24 00:31:29.932315 systemd-logind[1444]: Removed session 12. Jan 24 00:31:29.997957 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 58558 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:30.001088 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:30.025716 systemd-logind[1444]: New session 13 of user core. Jan 24 00:31:30.035242 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:31:30.312785 sshd[5617]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:30.322312 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:58558.service: Deactivated successfully. Jan 24 00:31:30.326558 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:31:30.330206 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:31:30.352828 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:58566.service - OpenSSH per-connection server daemon (10.0.0.1:58566). Jan 24 00:31:30.356695 systemd-logind[1444]: Removed session 13. Jan 24 00:31:30.405700 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 58566 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:30.409085 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:30.420172 systemd-logind[1444]: New session 14 of user core. Jan 24 00:31:30.426933 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:31:30.592258 sshd[5629]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:30.597571 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:58566.service: Deactivated successfully. Jan 24 00:31:30.600092 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:31:30.601227 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:31:30.603021 systemd-logind[1444]: Removed session 14. 
Jan 24 00:31:32.583282 kubelet[2559]: E0124 00:31:32.583191 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:33.585422 kubelet[2559]: E0124 00:31:33.585282 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:31:34.582863 kubelet[2559]: E0124 00:31:34.582732 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:34.587924 kubelet[2559]: E0124 00:31:34.587777 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:31:35.584288 kubelet[2559]: E0124 00:31:35.584012 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:31:35.612365 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:57848.service - OpenSSH per-connection server daemon (10.0.0.1:57848). Jan 24 00:31:35.650748 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 57848 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:35.654352 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:35.661621 systemd-logind[1444]: New session 15 of user core. Jan 24 00:31:35.669933 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:31:35.833433 sshd[5643]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:35.838293 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:57848.service: Deactivated successfully. 
Jan 24 00:31:35.841210 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:31:35.842539 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:31:35.843986 systemd-logind[1444]: Removed session 15. Jan 24 00:31:37.584090 kubelet[2559]: E0124 00:31:37.584043 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:31:37.585093 kubelet[2559]: E0124 00:31:37.584089 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:31:37.585198 kubelet[2559]: E0124 00:31:37.585131 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:31:38.655155 update_engine[1450]: I20260124 00:31:38.654973 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:31:38.655849 update_engine[1450]: I20260124 00:31:38.655577 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:31:38.656028 update_engine[1450]: I20260124 00:31:38.655951 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:31:38.672816 update_engine[1450]: E20260124 00:31:38.672561 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:31:38.672816 update_engine[1450]: I20260124 00:31:38.672796 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 00:31:40.862772 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:57858.service - OpenSSH per-connection server daemon (10.0.0.1:57858). 
Jan 24 00:31:40.901672 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 57858 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:40.903806 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:40.910162 systemd-logind[1444]: New session 16 of user core. Jan 24 00:31:40.919147 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:31:41.201368 sshd[5683]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:41.207278 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:57858.service: Deactivated successfully. Jan 24 00:31:41.210132 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:31:41.211486 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:31:41.213205 systemd-logind[1444]: Removed session 16. Jan 24 00:31:46.217540 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:46356.service - OpenSSH per-connection server daemon (10.0.0.1:46356). Jan 24 00:31:46.260287 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 46356 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:46.262326 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:46.267065 systemd-logind[1444]: New session 17 of user core. Jan 24 00:31:46.274851 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:31:46.399399 sshd[5703]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:46.404101 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:46356.service: Deactivated successfully. Jan 24 00:31:46.406362 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:31:46.407519 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:31:46.409290 systemd-logind[1444]: Removed session 17. 
Jan 24 00:31:47.584152 kubelet[2559]: E0124 00:31:47.584039 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06" Jan 24 00:31:48.584407 kubelet[2559]: E0124 00:31:48.584186 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a" Jan 24 00:31:48.585086 kubelet[2559]: E0124 00:31:48.584973 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c" Jan 24 00:31:48.656557 update_engine[1450]: I20260124 00:31:48.656414 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:31:48.657103 update_engine[1450]: I20260124 00:31:48.656977 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:31:48.657333 update_engine[1450]: I20260124 00:31:48.657225 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:31:48.674279 update_engine[1450]: E20260124 00:31:48.674134 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:31:48.674279 update_engine[1450]: I20260124 00:31:48.674247 1450 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:31:48.674279 update_engine[1450]: I20260124 00:31:48.674259 1450 omaha_request_action.cc:617] Omaha request response: Jan 24 00:31:48.674515 update_engine[1450]: E20260124 00:31:48.674361 1450 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677102 1450 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677143 1450 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677153 1450 update_attempter.cc:306] Processing Done. Jan 24 00:31:48.677191 update_engine[1450]: E20260124 00:31:48.677171 1450 update_attempter.cc:619] Update failed. Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677181 1450 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677187 1450 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 00:31:48.677191 update_engine[1450]: I20260124 00:31:48.677194 1450 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 24 00:31:48.677436 update_engine[1450]: I20260124 00:31:48.677281 1450 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:31:48.677436 update_engine[1450]: I20260124 00:31:48.677309 1450 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:31:48.677436 update_engine[1450]: I20260124 00:31:48.677317 1450 omaha_request_action.cc:272] Request: Jan 24 00:31:48.677436 update_engine[1450]: [Omaha request XML body not preserved in this capture] Jan 24 00:31:48.677436 update_engine[1450]: I20260124 00:31:48.677325 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:31:48.677759 update_engine[1450]: I20260124 00:31:48.677542 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:31:48.677889 update_engine[1450]: I20260124 00:31:48.677821 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:31:48.678036 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 00:31:48.696142 update_engine[1450]: E20260124 00:31:48.696005 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:31:48.696142 update_engine[1450]: I20260124 00:31:48.696139 1450 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696156 1450 omaha_request_action.cc:617] Omaha request response: Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696169 1450 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696178 1450 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696188 1450 update_attempter.cc:306] Processing Done. Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696199 1450 update_attempter.cc:310] Error event sent.
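[Annotation] The failed check above ends with utils.cc:600 converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse (code 37) before a best-effort error event is posted and the attempter returns to idle. A tiny Go rendering of that conversion step; only the two code numbers and the one constant name are taken from the log, the rest is illustrative rather than update_engine's actual C++:

```go
package main

import "fmt"

// 37 is kActionCodeOmahaErrorInHTTPResponse per the log; the symbolic
// name of code 2000 is not shown there, so it stays a bare number here.
const kActionCodeOmahaErrorInHTTPResponse = 37

func convertErrorCode(code int) int {
	// Fold the generic transfer failure (2000) into the canonical Omaha
	// error so payload state tracks a single failure reason.
	if code == 2000 {
		return kActionCodeOmahaErrorInHTTPResponse
	}
	return code
}

func main() {
	fmt.Printf("Updating payload state for error code: %d\n", convertErrorCode(2000))
	fmt.Println("Ignoring failures until we get a valid Omaha response.")
}
```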
Jan 24 00:31:48.696319 update_engine[1450]: I20260124 00:31:48.696215 1450 update_check_scheduler.cc:74] Next update check in 42m7s Jan 24 00:31:48.697018 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 24 00:31:49.583418 kubelet[2559]: E0124 00:31:49.583308 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:31:49.585678 kubelet[2559]: E0124 00:31:49.585462 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593" Jan 24 00:31:50.585579 kubelet[2559]: E0124 00:31:50.585333 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8" Jan 24 00:31:50.585579 kubelet[2559]: E0124 00:31:50.585333 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee" Jan 24 00:31:51.413748 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358). Jan 24 00:31:51.479702 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:51.482743 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:51.496798 systemd-logind[1444]: New session 18 of user core. Jan 24 00:31:51.508832 systemd[1]: Started session-18.scope - Session 18 of User core. 
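[Annotation] The non-round "Next update check in 42m7s" above suggests the scheduler draws each interval from a base period plus random fuzz rather than using a fixed value. A sketch under that assumption, with a hypothetical 45-minute base and 10-minute fuzz window; the real update_engine policy may differ:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextCheck draws an interval uniformly from [base-fuzz/2, base+fuzz/2).
func nextCheck(base, fuzz time.Duration) time.Duration {
	return base + time.Duration(rand.Int63n(int64(fuzz))) - fuzz/2
}

func main() {
	fmt.Printf("Next update check in %s\n",
		nextCheck(45*time.Minute, 10*time.Minute).Round(time.Second))
}
```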
Jan 24 00:31:51.677982 sshd[5717]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:51.692153 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:46358.service: Deactivated successfully. Jan 24 00:31:51.695763 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:31:51.698886 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:31:51.707351 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:46374.service - OpenSSH per-connection server daemon (10.0.0.1:46374). Jan 24 00:31:51.709258 systemd-logind[1444]: Removed session 18. Jan 24 00:31:51.756310 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:51.758332 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:51.765403 systemd-logind[1444]: New session 19 of user core. Jan 24 00:31:51.774014 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:31:52.171962 sshd[5732]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:52.183837 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:46374.service: Deactivated successfully. Jan 24 00:31:52.187141 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:31:52.189823 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:31:52.199370 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:46390.service - OpenSSH per-connection server daemon (10.0.0.1:46390). Jan 24 00:31:52.201711 systemd-logind[1444]: Removed session 19. Jan 24 00:31:52.246747 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:52.249478 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:52.256863 systemd-logind[1444]: New session 20 of user core. Jan 24 00:31:52.268125 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:31:52.951005 sshd[5745]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:52.963763 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:46390.service: Deactivated successfully. Jan 24 00:31:52.965898 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:31:52.971046 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:31:52.979367 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394). Jan 24 00:31:52.984780 systemd-logind[1444]: Removed session 20. Jan 24 00:31:53.043562 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:53.046151 sshd[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:53.054876 systemd-logind[1444]: New session 21 of user core. Jan 24 00:31:53.063019 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:31:53.413813 sshd[5763]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:53.433828 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:46394.service: Deactivated successfully. Jan 24 00:31:53.438499 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:31:53.441121 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:31:53.462188 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:46396.service - OpenSSH per-connection server daemon (10.0.0.1:46396). 
Jan 24 00:31:53.464742 systemd-logind[1444]: Removed session 21. Jan 24 00:31:53.528508 sshd[5776]: Accepted publickey for core from 10.0.0.1 port 46396 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:53.530872 sshd[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:53.537563 systemd-logind[1444]: New session 22 of user core. Jan 24 00:31:53.546081 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:31:53.846861 sshd[5776]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:53.855456 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:46396.service: Deactivated successfully. Jan 24 00:31:53.860136 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:31:53.863027 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:31:53.872479 systemd-logind[1444]: Removed session 22. Jan 24 00:31:58.865252 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:54046.service - OpenSSH per-connection server daemon (10.0.0.1:54046). Jan 24 00:31:58.923994 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 54046 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:31:58.926869 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:58.933813 systemd-logind[1444]: New session 23 of user core. Jan 24 00:31:58.944884 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:31:59.102949 sshd[5792]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:59.113100 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:54046.service: Deactivated successfully. Jan 24 00:31:59.118273 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:31:59.121851 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:31:59.125351 systemd-logind[1444]: Removed session 23. 
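[Annotation] The kubelet entries throughout this log use the klog header format "Lmmdd hh:mm:ss.uuuuuu pid file:line] msg", e.g. "E0124 00:31:07.584123 2559 pod_workers.go:1324] ...". A small parser sketch for pulling severity, source location, and message out of such entries:

```go
package main

import (
	"fmt"
	"regexp"
)

// Groups: severity (I/W/E/F), mmdd date, time, pid, file:line, message.
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := `E0124 00:31:07.584123 2559 pod_workers.go:1324] "Error syncing pod, skipping"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```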
Jan 24 00:32:00.583049 kubelet[2559]: E0124 00:32:00.583011 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:32:00.585110 kubelet[2559]: E0124 00:32:00.584180 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06"
Jan 24 00:32:01.585372 containerd[1464]: time="2026-01-24T00:32:01.585047670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:32:01.652284 containerd[1464]: time="2026-01-24T00:32:01.652141210Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:01.655389 containerd[1464]: time="2026-01-24T00:32:01.655131486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:32:01.655389 containerd[1464]: time="2026-01-24T00:32:01.655221014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:32:01.655960 kubelet[2559]: E0124 00:32:01.655515 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:01.655960 kubelet[2559]: E0124 00:32:01.655574 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:01.656437 kubelet[2559]: E0124 00:32:01.655998 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-25rx6_calico-apiserver(231ec5ca-b889-4270-8427-6227d170b1c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:01.656437 kubelet[2559]: E0124 00:32:01.656039 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8"
Jan 24 00:32:01.657322 containerd[1464]: time="2026-01-24T00:32:01.656268059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:32:01.751085 containerd[1464]: time="2026-01-24T00:32:01.750947511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:01.753490 containerd[1464]: time="2026-01-24T00:32:01.753349258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:32:01.753490 containerd[1464]: time="2026-01-24T00:32:01.753431400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:32:01.754344 kubelet[2559]: E0124 00:32:01.753849 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:32:01.754344 kubelet[2559]: E0124 00:32:01.753901 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:32:01.754344 kubelet[2559]: E0124 00:32:01.753966 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:01.757511 containerd[1464]: time="2026-01-24T00:32:01.757224956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:32:01.841766 containerd[1464]: time="2026-01-24T00:32:01.841388231Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:01.843699 containerd[1464]: time="2026-01-24T00:32:01.843513064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:32:01.843894 containerd[1464]: time="2026-01-24T00:32:01.843622454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:32:01.844100 kubelet[2559]: E0124 00:32:01.844014 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:32:01.844100 kubelet[2559]: E0124 00:32:01.844076 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:32:01.844224 kubelet[2559]: E0124 00:32:01.844158 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-558b4bdd9c-hsrgc_calico-system(c7a745d0-7ac5-41d7-b9e1-a1e923945f5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:01.844262 kubelet[2559]: E0124 00:32:01.844226 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c"
Jan 24 00:32:03.583959 kubelet[2559]: E0124 00:32:03.583491 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:32:03.584560 kubelet[2559]: E0124 00:32:03.584365 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a"
Jan 24 00:32:03.586664 kubelet[2559]: E0124 00:32:03.586374 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee"
Jan 24 00:32:04.130029 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:54058.service - OpenSSH per-connection server daemon (10.0.0.1:54058).
Jan 24 00:32:04.176279 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 54058 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:32:04.178685 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:04.185146 systemd-logind[1444]: New session 24 of user core.
Jan 24 00:32:04.196939 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:32:04.342353 sshd[5814]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:04.346773 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:54058.service: Deactivated successfully.
Jan 24 00:32:04.356904 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:32:04.358100 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:32:04.359574 systemd-logind[1444]: Removed session 24.
Jan 24 00:32:04.584080 containerd[1464]: time="2026-01-24T00:32:04.584033063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:32:04.677548 containerd[1464]: time="2026-01-24T00:32:04.677451610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:04.679102 containerd[1464]: time="2026-01-24T00:32:04.679014353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:32:04.679161 containerd[1464]: time="2026-01-24T00:32:04.679067813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:32:04.679449 kubelet[2559]: E0124 00:32:04.679371 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:32:04.679449 kubelet[2559]: E0124 00:32:04.679438 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:32:04.680135 kubelet[2559]: E0124 00:32:04.679557 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-dc5d5c67c-njvsm_calico-system(bba2b9d8-6e2d-4162-96fa-f25acd35c593): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:04.680135 kubelet[2559]: E0124 00:32:04.679664 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593"
Jan 24 00:32:09.358263 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:37196.service - OpenSSH per-connection server daemon (10.0.0.1:37196).
Jan 24 00:32:09.406002 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 37196 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:32:09.407946 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:09.416239 systemd-logind[1444]: New session 25 of user core.
Jan 24 00:32:09.423875 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 00:32:09.585065 sshd[5854]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:09.590099 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:37196.service: Deactivated successfully.
Jan 24 00:32:09.592651 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 00:32:09.593685 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Jan 24 00:32:09.595765 systemd-logind[1444]: Removed session 25.
Jan 24 00:32:14.584392 kubelet[2559]: E0124 00:32:14.584244 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-25rx6" podUID="231ec5ca-b889-4270-8427-6227d170b1c8"
Jan 24 00:32:14.611104 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:36292.service - OpenSSH per-connection server daemon (10.0.0.1:36292).
Jan 24 00:32:14.654264 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 36292 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:32:14.656920 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:14.663295 systemd-logind[1444]: New session 26 of user core.
Jan 24 00:32:14.669930 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 00:32:14.813680 sshd[5872]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:14.818411 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:36292.service: Deactivated successfully.
Jan 24 00:32:14.821543 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 00:32:14.824463 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit.
Jan 24 00:32:14.826375 systemd-logind[1444]: Removed session 26.
Jan 24 00:32:15.586423 containerd[1464]: time="2026-01-24T00:32:15.585930911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:32:15.676125 containerd[1464]: time="2026-01-24T00:32:15.676053429Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:15.677330 containerd[1464]: time="2026-01-24T00:32:15.677229116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:32:15.677408 containerd[1464]: time="2026-01-24T00:32:15.677320326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:32:15.677708 kubelet[2559]: E0124 00:32:15.677634 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:32:15.677708 kubelet[2559]: E0124 00:32:15.677692 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:32:15.678725 kubelet[2559]: E0124 00:32:15.678655 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xsh9r_calico-system(9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:15.678725 kubelet[2559]: E0124 00:32:15.678694 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xsh9r" podUID="9dbd8ba0-6eab-49ff-9cd2-d46a7d492e06"
Jan 24 00:32:16.591059 containerd[1464]: time="2026-01-24T00:32:16.591016006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:32:16.592112 kubelet[2559]: E0124 00:32:16.591313 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b4bdd9c-hsrgc" podUID="c7a745d0-7ac5-41d7-b9e1-a1e923945f5c"
Jan 24 00:32:16.666736 containerd[1464]: time="2026-01-24T00:32:16.666639950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:16.668525 containerd[1464]: time="2026-01-24T00:32:16.668405990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:32:16.668675 containerd[1464]: time="2026-01-24T00:32:16.668542585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:32:16.669269 kubelet[2559]: E0124 00:32:16.669136 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:16.669269 kubelet[2559]: E0124 00:32:16.669188 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:16.669381 kubelet[2559]: E0124 00:32:16.669274 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6896bc5cbd-7zn99_calico-apiserver(450f5953-3642-4725-b413-4ccd0b446f9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:16.669381 kubelet[2559]: E0124 00:32:16.669306 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6896bc5cbd-7zn99" podUID="450f5953-3642-4725-b413-4ccd0b446f9a"
Jan 24 00:32:17.585517 containerd[1464]: time="2026-01-24T00:32:17.585241457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:32:17.656528 containerd[1464]: time="2026-01-24T00:32:17.656409946Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:17.658557 containerd[1464]: time="2026-01-24T00:32:17.658342407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:32:17.658557 containerd[1464]: time="2026-01-24T00:32:17.658451151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:32:17.658813 kubelet[2559]: E0124 00:32:17.658747 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:32:17.659145 kubelet[2559]: E0124 00:32:17.658825 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:32:17.659145 kubelet[2559]: E0124 00:32:17.658951 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:17.662553 containerd[1464]: time="2026-01-24T00:32:17.662224844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:32:17.723567 containerd[1464]: time="2026-01-24T00:32:17.723495693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:17.725477 containerd[1464]: time="2026-01-24T00:32:17.725406389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:32:17.725639 containerd[1464]: time="2026-01-24T00:32:17.725510854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:32:17.726312 kubelet[2559]: E0124 00:32:17.725817 2559 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:32:17.726312 kubelet[2559]: E0124 00:32:17.725877 2559 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:32:17.726312 kubelet[2559]: E0124 00:32:17.726009 2559 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-frc8p_calico-system(40661dc3-d91f-42a2-a397-77dbe1e37cee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:17.726566 kubelet[2559]: E0124 00:32:17.726065 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frc8p" podUID="40661dc3-d91f-42a2-a397-77dbe1e37cee"
Jan 24 00:32:18.584691 kubelet[2559]: E0124 00:32:18.584467 2559 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc5d5c67c-njvsm" podUID="bba2b9d8-6e2d-4162-96fa-f25acd35c593"
Jan 24 00:32:19.838270 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:36306.service - OpenSSH per-connection server daemon (10.0.0.1:36306).
Jan 24 00:32:19.921534 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 36306 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM
Jan 24 00:32:19.923787 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:19.931766 systemd-logind[1444]: New session 27 of user core.
Jan 24 00:32:19.938999 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 24 00:32:20.135722 sshd[5907]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:20.142112 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:36306.service: Deactivated successfully.
Jan 24 00:32:20.145461 systemd[1]: session-27.scope: Deactivated successfully.
Jan 24 00:32:20.147356 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Jan 24 00:32:20.149509 systemd-logind[1444]: Removed session 27.
Jan 24 00:32:20.585634 kubelet[2559]: E0124 00:32:20.585495 2559 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"