Jan 17 00:40:11.628050 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:40:11.628139 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:40:11.628164 kernel: BIOS-provided physical RAM map:
Jan 17 00:40:11.628175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:40:11.628186 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 00:40:11.628196 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 00:40:11.628208 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 00:40:11.628217 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 00:40:11.628225 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 00:40:11.628233 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 00:40:11.628247 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 00:40:11.628256 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 00:40:11.628290 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 00:40:11.628336 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 00:40:11.628371 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 00:40:11.628382 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 00:40:11.628400 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 00:40:11.628410 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 00:40:11.628420 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 00:40:11.628432 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:40:11.628443 kernel: NX (Execute Disable) protection: active
Jan 17 00:40:11.628452 kernel: APIC: Static calls initialized
Jan 17 00:40:11.628592 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:40:11.628674 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 17 00:40:11.628687 kernel: SMBIOS 2.8 present.
Jan 17 00:40:11.628760 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 00:40:11.628774 kernel: Hypervisor detected: KVM
Jan 17 00:40:11.628793 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:40:11.628802 kernel: kvm-clock: using sched offset of 12172762898 cycles
Jan 17 00:40:11.628815 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:40:11.628825 kernel: tsc: Detected 2445.424 MHz processor
Jan 17 00:40:11.628837 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:40:11.628847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:40:11.628860 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 00:40:11.628870 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:40:11.628880 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:40:11.628896 kernel: Using GB pages for direct mapping
Jan 17 00:40:11.628907 kernel: Secure boot disabled
Jan 17 00:40:11.628920 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:40:11.628932 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 00:40:11.628950 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:40:11.628962 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.628973 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.628989 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 00:40:11.629000 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.629037 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.629049 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.629059 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:40:11.629070 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:40:11.629081 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 00:40:11.629146 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 17 00:40:11.629159 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 00:40:11.629169 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 00:40:11.629179 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 00:40:11.629190 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 00:40:11.629201 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 00:40:11.629212 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 00:40:11.629224 kernel: No NUMA configuration found
Jan 17 00:40:11.629260 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 00:40:11.629281 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 00:40:11.629343 kernel: Zone ranges:
Jan 17 00:40:11.629358 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:40:11.629368 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 00:40:11.629380 kernel: Normal empty
Jan 17 00:40:11.629393 kernel: Movable zone start for each node
Jan 17 00:40:11.629403 kernel: Early memory node ranges
Jan 17 00:40:11.629415 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:40:11.629426 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 00:40:11.629445 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 00:40:11.629483 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 00:40:11.629496 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 00:40:11.629507 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 00:40:11.629542 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 00:40:11.629556 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:40:11.629566 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:40:11.629578 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 00:40:11.629590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:40:11.629600 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 00:40:11.629620 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:40:11.629630 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 00:40:11.629642 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:40:11.629652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:40:11.629663 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:40:11.629675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:40:11.629687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:40:11.629699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:40:11.629710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:40:11.629897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:40:11.629911 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:40:11.629923 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:40:11.629934 kernel: TSC deadline timer available
Jan 17 00:40:11.629947 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 00:40:11.629957 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:40:11.629970 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 00:40:11.629980 kernel: kvm-guest: setup PV sched yield
Jan 17 00:40:11.629992 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:40:11.630391 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:40:11.630414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:40:11.630429 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 00:40:11.630440 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 17 00:40:11.630452 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 17 00:40:11.630464 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 00:40:11.630476 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:40:11.630488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:40:11.630502 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:40:11.630547 kernel: random: crng init done
Jan 17 00:40:11.630561 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:40:11.630573 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:40:11.630584 kernel: Fallback order for Node 0: 0
Jan 17 00:40:11.630596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 00:40:11.630609 kernel: Policy zone: DMA32
Jan 17 00:40:11.630619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:40:11.630633 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 17 00:40:11.630652 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 00:40:11.630664 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:40:11.630677 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:40:11.630688 kernel: Dynamic Preempt: voluntary
Jan 17 00:40:11.630700 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:40:11.630735 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:40:11.630753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 00:40:11.630764 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:40:11.630777 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:40:11.630789 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:40:11.630802 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:40:11.630814 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 00:40:11.630833 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 00:40:11.630846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:40:11.630859 kernel: Console: colour dummy device 80x25
Jan 17 00:40:11.630869 kernel: printk: console [ttyS0] enabled
Jan 17 00:40:11.630907 kernel: ACPI: Core revision 20230628
Jan 17 00:40:11.630926 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:40:11.630938 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:40:11.630952 kernel: x2apic enabled
Jan 17 00:40:11.630964 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:40:11.630977 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 00:40:11.630989 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 00:40:11.631000 kernel: kvm-guest: setup PV IPIs
Jan 17 00:40:11.631013 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:40:11.631025 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:40:11.631044 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 17 00:40:11.631057 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:40:11.631068 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:40:11.631081 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:40:11.631262 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:40:11.631277 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:40:11.631290 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:40:11.631331 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:40:11.631350 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 00:40:11.631364 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 00:40:11.631376 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:40:11.631390 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 00:40:11.631401 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:40:11.631438 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:40:11.631453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:40:11.631466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:40:11.631477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:40:11.631497 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:40:11.631508 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 00:40:11.631520 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:40:11.631533 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:40:11.631544 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:40:11.631558 kernel: landlock: Up and running.
Jan 17 00:40:11.631569 kernel: SELinux: Initializing.
Jan 17 00:40:11.631582 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:40:11.631594 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:40:11.631615 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 17 00:40:11.631628 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:40:11.631642 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:40:11.631652 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:40:11.631666 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 17 00:40:11.631677 kernel: signal: max sigframe size: 1776
Jan 17 00:40:11.631690 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:40:11.631702 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:40:11.631720 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:40:11.631733 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:40:11.631744 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:40:11.631757 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 00:40:11.631770 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 00:40:11.631781 kernel: smpboot: Max logical packages: 1
Jan 17 00:40:11.631792 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 17 00:40:11.631804 kernel: devtmpfs: initialized
Jan 17 00:40:11.631817 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:40:11.631828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 00:40:11.631847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 00:40:11.631860 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 00:40:11.631871 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 00:40:11.631885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 00:40:11.631896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:40:11.631909 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 00:40:11.631921 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:40:11.631933 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:40:11.631951 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:40:11.631964 kernel: audit: type=2000 audit(1768610407.492:1): state=initialized audit_enabled=0 res=1
Jan 17 00:40:11.631977 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:40:11.631989 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:40:11.632002 kernel: cpuidle: using governor menu
Jan 17 00:40:11.632015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:40:11.632028 kernel: dca service started, version 1.12.1
Jan 17 00:40:11.632073 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:40:11.632130 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:40:11.632152 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:40:11.632165 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:40:11.632178 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:40:11.632191 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:40:11.632203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:40:11.632216 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:40:11.632229 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:40:11.632240 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:40:11.632254 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:40:11.632272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:40:11.632285 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:40:11.632324 kernel: ACPI: Interpreter enabled
Jan 17 00:40:11.632364 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:40:11.632378 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:40:11.632389 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:40:11.632403 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:40:11.632414 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:40:11.632427 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:40:11.633049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:40:11.633390 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:40:11.633625 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:40:11.633646 kernel: PCI host bridge to bus 0000:00
Jan 17 00:40:11.633959 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:40:11.634323 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:40:11.634540 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:40:11.634764 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 00:40:11.634975 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:40:11.635253 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 00:40:11.635523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:40:11.635854 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:40:11.636226 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 00:40:11.636509 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 00:40:11.636770 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 00:40:11.636997 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:40:11.638216 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:40:11.638455 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:40:11.638743 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:40:11.638964 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 00:40:11.639244 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 00:40:11.639653 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 00:40:11.640018 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:40:11.640381 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 00:40:11.640599 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 00:40:11.640827 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 00:40:11.641152 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:40:11.641401 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 00:40:11.641650 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 00:40:11.641840 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 00:40:11.642029 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 00:40:11.642379 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:40:11.642607 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:40:11.642902 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:40:11.643194 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 00:40:11.643485 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 00:40:11.643833 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:40:11.644135 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 00:40:11.644157 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:40:11.644171 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:40:11.644184 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:40:11.644204 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:40:11.644217 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:40:11.644230 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:40:11.644241 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:40:11.644254 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:40:11.644267 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:40:11.644278 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:40:11.644289 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:40:11.644335 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:40:11.644355 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:40:11.644368 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:40:11.644381 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:40:11.644393 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:40:11.644405 kernel: iommu: Default domain type: Translated
Jan 17 00:40:11.644418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:40:11.644429 kernel: efivars: Registered efivars operations
Jan 17 00:40:11.644441 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:40:11.644452 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:40:11.644469 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 00:40:11.644482 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 00:40:11.644493 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 00:40:11.644504 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 00:40:11.644731 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:40:11.644956 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:40:11.645229 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:40:11.645251 kernel: vgaarb: loaded
Jan 17 00:40:11.645272 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:40:11.645285 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:40:11.645332 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:40:11.645346 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:40:11.645359 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:40:11.645372 kernel: pnp: PnP ACPI init
Jan 17 00:40:11.645709 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:40:11.645729 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 00:40:11.645744 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:40:11.645764 kernel: NET: Registered PF_INET protocol family
Jan 17 00:40:11.645777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:40:11.645790 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:40:11.645803 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:40:11.645814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:40:11.645827 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:40:11.645839 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:40:11.645852 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:40:11.645871 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:40:11.645883 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:40:11.645896 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:40:11.646218 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 00:40:11.646489 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 00:40:11.646704 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:40:11.646997 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:40:11.647346 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:40:11.647597 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 00:40:11.647865 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:40:11.648077 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 00:40:11.648147 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:40:11.648163 kernel: Initialise system trusted keyrings
Jan 17 00:40:11.648176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:40:11.648188 kernel: Key type asymmetric registered
Jan 17 00:40:11.648200 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:40:11.648212 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:40:11.648232 kernel: io scheduler mq-deadline registered
Jan 17 00:40:11.648245 kernel: io scheduler kyber registered
Jan 17 00:40:11.648257 kernel: io scheduler bfq registered
Jan 17 00:40:11.648270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:40:11.648283 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:40:11.648325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:40:11.648339 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 00:40:11.648351 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:40:11.648364 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:40:11.648382 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:40:11.648397 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:40:11.648408 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:40:11.648719 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 00:40:11.648948 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 00:40:11.648969 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:40:11.649239 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:40:10 UTC (1768610410)
Jan 17 00:40:11.649492 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:40:11.649521 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:40:11.649534 kernel: efifb: probing for efifb
Jan 17 00:40:11.649547 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 00:40:11.649561 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 00:40:11.649573 kernel: efifb: scrolling: redraw
Jan 17 00:40:11.649586 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 00:40:11.649600 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:40:11.649613 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:40:11.649626 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:40:11.649644 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:40:11.649658 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:40:11.649670 kernel: Segment Routing with IPv6
Jan 17 00:40:11.649683 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:40:11.649697 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:40:11.649708 kernel: Key type dns_resolver registered
Jan 17 00:40:11.649722 kernel: IPI shorthand broadcast: enabled
Jan 17 00:40:11.649767 kernel: sched_clock: Marking stable (2063023578, 681951367)->(3732791113, -987816168)
Jan 17 00:40:11.649786 kernel: registered taskstats version 1
Jan 17 00:40:11.649804 kernel: Loading compiled-in X.509 certificates
Jan 17 00:40:11.649817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:40:11.649831 kernel: Key type .fscrypt registered
Jan 17 00:40:11.649843 kernel: Key type fscrypt-provisioning registered
Jan 17 00:40:11.649857 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:40:11.649871 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:40:11.649884 kernel: ima: No architecture policies found
Jan 17 00:40:11.649898 kernel: clk: Disabling unused clocks
Jan 17 00:40:11.649911 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:40:11.649930 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:40:11.649944 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:40:11.649957 kernel: Run /init as init process
Jan 17 00:40:11.649971 kernel: with arguments:
Jan 17 00:40:11.649983 kernel: /init
Jan 17 00:40:11.649998 kernel: with environment:
Jan 17 00:40:11.650010 kernel: HOME=/
Jan 17 00:40:11.650024 kernel: TERM=linux
Jan 17 00:40:11.650040 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:40:11.650065 systemd[1]: Detected virtualization kvm.
Jan 17 00:40:11.650081 systemd[1]: Detected architecture x86-64.
Jan 17 00:40:11.650148 systemd[1]: Running in initrd.
Jan 17 00:40:11.650163 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:40:11.650177 systemd[1]: Hostname set to .
Jan 17 00:40:11.650191 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:40:11.650212 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:40:11.650225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:40:11.650241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:40:11.650256 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:40:11.650271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:40:11.650285 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:40:11.650335 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:40:11.650352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:40:11.650365 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:40:11.650379 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:40:11.650391 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:40:11.650405 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:40:11.650424 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:40:11.650439 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:40:11.650453 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:40:11.650466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:40:11.650480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:40:11.650492 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:40:11.650504 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:40:11.650516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:40:11.650528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:40:11.650545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:40:11.650556 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:40:11.650568 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:40:11.650580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:40:11.650591 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:40:11.650603 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:40:11.650615 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:40:11.650626 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:40:11.650677 systemd-journald[194]: Collecting audit messages is disabled.
Jan 17 00:40:11.650717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:40:11.650731 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:40:11.650744 systemd-journald[194]: Journal started
Jan 17 00:40:11.650772 systemd-journald[194]: Runtime Journal (/run/log/journal/a223240c87ed4c6e8ae7529a83412908) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:40:11.665141 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:40:11.674538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:40:11.675015 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:40:11.711351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:40:11.719705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:40:11.747081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:11.758461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:40:11.801032 systemd-modules-load[195]: Inserted module 'overlay'
Jan 17 00:40:11.802461 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:40:11.815994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:40:11.819809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:40:11.878046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:40:11.895590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:40:11.905720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:40:11.940494 dracut-cmdline[226]: dracut-dracut-053
Jan 17 00:40:11.945976 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:40:11.977603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:40:11.989991 kernel: Bridge firewalling registered
Jan 17 00:40:11.988220 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 17 00:40:11.994187 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:40:12.015405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:40:12.041893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:40:12.064528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:40:12.127215 kernel: SCSI subsystem initialized
Jan 17 00:40:12.128266 systemd-resolved[284]: Positive Trust Anchors:
Jan 17 00:40:12.128280 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:40:12.128356 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:40:12.132399 systemd-resolved[284]: Defaulting to hostname 'linux'.
Jan 17 00:40:12.195759 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:40:12.134863 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:40:12.157583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:40:12.228845 kernel: iscsi: registered transport (tcp)
Jan 17 00:40:12.264747 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:40:12.264826 kernel: QLogic iSCSI HBA Driver
Jan 17 00:40:12.384219 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:40:12.413899 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:40:12.495861 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:40:12.495930 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:40:12.502421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:40:12.594384 kernel: raid6: avx2x4 gen() 19870 MB/s
Jan 17 00:40:12.614368 kernel: raid6: avx2x2 gen() 12449 MB/s
Jan 17 00:40:12.635392 kernel: raid6: avx2x1 gen() 12273 MB/s
Jan 17 00:40:12.635477 kernel: raid6: using algorithm avx2x4 gen() 19870 MB/s
Jan 17 00:40:12.657525 kernel: raid6: .... xor() 4753 MB/s, rmw enabled
Jan 17 00:40:12.657601 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:40:12.696499 kernel: xor: automatically using best checksumming function avx
Jan 17 00:40:13.047544 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:40:13.082425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:40:13.116585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:40:13.152838 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Jan 17 00:40:13.166576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:40:13.191693 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:40:13.215927 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jan 17 00:40:13.294468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:40:13.329341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:40:13.503666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:40:13.518557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:40:13.555000 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:40:13.563145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:40:13.574351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:40:13.582937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:40:13.606171 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 00:40:13.611510 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:40:13.624754 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 00:40:13.635450 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:40:13.652794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:40:13.652857 kernel: GPT:9289727 != 19775487
Jan 17 00:40:13.652878 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:40:13.652896 kernel: GPT:9289727 != 19775487
Jan 17 00:40:13.652912 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:40:13.652927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:40:13.651269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:40:13.696532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:40:13.702420 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:40:13.720181 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:40:13.724472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:40:13.724846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:13.738536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:40:13.813191 kernel: libata version 3.00 loaded.
Jan 17 00:40:13.829526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:40:13.902814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:40:13.940333 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (472)
Jan 17 00:40:13.906557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:13.957577 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Jan 17 00:40:13.978193 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:40:14.053046 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:40:14.070913 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:40:14.071573 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:40:14.118608 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:40:14.119933 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:40:14.121177 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:40:14.120019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:40:14.143452 kernel: scsi host0: ahci
Jan 17 00:40:14.143954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:40:14.183538 kernel: scsi host1: ahci
Jan 17 00:40:14.183962 kernel: scsi host2: ahci
Jan 17 00:40:14.184377 kernel: scsi host3: ahci
Jan 17 00:40:14.184705 kernel: scsi host4: ahci
Jan 17 00:40:14.154220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:40:14.255943 kernel: scsi host5: ahci
Jan 17 00:40:14.256511 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 00:40:14.256573 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 00:40:14.256589 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 00:40:14.256604 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 00:40:14.256619 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 00:40:14.256633 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 00:40:14.232979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:40:14.332551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:40:14.366954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:40:14.371007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:14.434049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:40:14.467264 disk-uuid[563]: Primary Header is updated.
Jan 17 00:40:14.467264 disk-uuid[563]: Secondary Entries is updated.
Jan 17 00:40:14.467264 disk-uuid[563]: Secondary Header is updated.
Jan 17 00:40:14.485552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:40:14.528270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:40:14.569493 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:40:14.605204 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 00:40:14.607645 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:40:14.629266 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:40:14.629370 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:40:14.640406 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:40:14.640470 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:40:14.644465 kernel: ata3.00: applying bridge limits
Jan 17 00:40:14.655350 kernel: ata3.00: configured for UDMA/100
Jan 17 00:40:14.661265 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:40:14.840572 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:40:14.851624 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:40:14.879157 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:40:15.566462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:40:15.578062 disk-uuid[564]: The operation has completed successfully.
Jan 17 00:40:15.746340 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:40:15.750528 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:40:15.778594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:40:15.840328 sh[601]: Success
Jan 17 00:40:15.925683 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:40:16.124375 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:40:16.171983 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:40:16.185719 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:40:16.282521 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:40:16.282609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:40:16.282647 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:40:16.308192 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:40:16.308415 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:40:16.376885 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:40:16.385218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:40:16.426805 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:40:16.431926 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:40:16.478932 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:40:16.478978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:40:16.482432 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:40:16.529745 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:40:16.563852 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:40:16.578287 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:40:16.658501 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:40:16.727443 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:40:17.233794 ignition[717]: Ignition 2.19.0
Jan 17 00:40:17.235590 ignition[717]: Stage: fetch-offline
Jan 17 00:40:17.235689 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:17.235705 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:17.235890 ignition[717]: parsed url from cmdline: ""
Jan 17 00:40:17.235896 ignition[717]: no config URL provided
Jan 17 00:40:17.235905 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:40:17.235920 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:40:17.236022 ignition[717]: op(1): [started] loading QEMU firmware config module
Jan 17 00:40:17.349011 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:40:17.236030 ignition[717]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 00:40:17.366552 ignition[717]: op(1): [finished] loading QEMU firmware config module
Jan 17 00:40:17.525157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:40:17.767427 systemd-networkd[788]: lo: Link UP
Jan 17 00:40:17.774605 systemd-networkd[788]: lo: Gained carrier
Jan 17 00:40:17.824340 systemd-networkd[788]: Enumeration completed
Jan 17 00:40:17.835617 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:40:17.846196 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:40:17.846204 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:40:17.851226 systemd[1]: Reached target network.target - Network.
Jan 17 00:40:17.877060 systemd-networkd[788]: eth0: Link UP
Jan 17 00:40:17.877070 systemd-networkd[788]: eth0: Gained carrier
Jan 17 00:40:17.877296 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:40:18.072567 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:40:18.144204 ignition[717]: parsing config with SHA512: d246c73a245481dca2c6b2988d93cf4f1e8c03d9280a35e3424dc445a73e7e44274456aa5cb70545e8a5940d9a3f3952c28b2bb53283f7ed8878f614042bfad5
Jan 17 00:40:18.172504 unknown[717]: fetched base config from "system"
Jan 17 00:40:18.172540 unknown[717]: fetched user config from "qemu"
Jan 17 00:40:18.173174 ignition[717]: fetch-offline: fetch-offline passed
Jan 17 00:40:18.233382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:40:18.173303 ignition[717]: Ignition finished successfully
Jan 17 00:40:18.276393 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 00:40:18.354034 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:40:18.740600 ignition[792]: Ignition 2.19.0
Jan 17 00:40:18.742842 ignition[792]: Stage: kargs
Jan 17 00:40:18.743218 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:18.743238 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:18.747983 ignition[792]: kargs: kargs passed
Jan 17 00:40:18.748071 ignition[792]: Ignition finished successfully
Jan 17 00:40:18.833421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:40:18.863915 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:40:19.035978 ignition[799]: Ignition 2.19.0
Jan 17 00:40:19.036008 ignition[799]: Stage: disks
Jan 17 00:40:19.040478 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:19.040519 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:19.044727 ignition[799]: disks: disks passed
Jan 17 00:40:19.058450 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:40:19.044804 ignition[799]: Ignition finished successfully
Jan 17 00:40:19.066188 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:40:19.066263 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:40:19.066370 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:40:19.066428 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:40:19.066467 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:40:19.138685 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:40:19.270767 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:40:19.291890 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:40:19.366989 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:40:19.782535 systemd-networkd[788]: eth0: Gained IPv6LL
Jan 17 00:40:20.379542 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:40:20.383798 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:40:20.442081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:40:20.683152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:40:20.748863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:40:20.766476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:40:20.862260 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Jan 17 00:40:20.867425 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:40:20.766573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:40:20.925228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:40:20.925293 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:40:20.766621 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:40:20.965720 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:40:20.999472 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:40:21.035882 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:40:21.088904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:40:21.431694 kernel: hrtimer: interrupt took 13457350 ns
Jan 17 00:40:21.642040 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:40:21.670465 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:40:21.754191 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:40:21.927562 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:40:22.786959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:40:22.816366 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:40:22.836576 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:40:22.862948 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:40:22.875440 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:40:22.968026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:40:22.983155 ignition[931]: INFO : Ignition 2.19.0
Jan 17 00:40:22.983155 ignition[931]: INFO : Stage: mount
Jan 17 00:40:22.995161 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:22.995161 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:22.995161 ignition[931]: INFO : mount: mount passed
Jan 17 00:40:22.995161 ignition[931]: INFO : Ignition finished successfully
Jan 17 00:40:22.993132 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:40:23.056560 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:40:23.109789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:40:23.145551 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Jan 17 00:40:23.166294 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:40:23.166419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:40:23.166443 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:40:23.187755 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:40:23.201663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:40:23.271877 ignition[960]: INFO : Ignition 2.19.0
Jan 17 00:40:23.271877 ignition[960]: INFO : Stage: files
Jan 17 00:40:23.271877 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:23.271877 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:23.271877 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:40:23.314017 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:40:23.314017 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:40:23.335399 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:40:23.335399 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:40:23.335399 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:40:23.335399 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:40:23.335399 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:40:23.322490 unknown[960]: wrote ssh authorized keys file for user: core
Jan 17 00:40:23.431527 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:40:23.567468 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:40:23.580232 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:40:23.593577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:40:23.668301 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:40:23.668301 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:40:23.668301 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:40:24.042983 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:40:28.440885 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:40:28.440885 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 17 00:40:28.480573 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 00:40:28.711992 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 00:40:28.761967 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 00:40:28.761967 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 00:40:28.780523 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:40:28.780523 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:40:28.780523 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:40:28.780523 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:40:28.780523 ignition[960]: INFO : files: files passed
Jan 17 00:40:28.780523 ignition[960]: INFO : Ignition finished successfully
Jan 17 00:40:28.852999 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:40:28.910931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:40:28.991723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:40:29.038818 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:40:29.039799 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:40:29.216003 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 00:40:29.240050 initrd-setup-root-after-ignition[992]: grep:
Jan 17 00:40:29.240050 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:40:29.260306 initrd-setup-root-after-ignition[992]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:40:29.260306 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:40:29.275456 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:40:29.316642 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:40:29.363593 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:40:29.610820 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:40:29.612515 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:40:29.620967 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:40:29.625394 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:40:29.634694 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:40:29.657426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:40:29.730689 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:40:29.754525 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:40:29.778775 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:40:29.805021 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:40:29.811290 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:40:29.818949 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:40:29.819225 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:40:29.828330 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:40:29.832580 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:40:29.841924 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:40:29.859371 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:40:29.869680 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:40:29.885278 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:40:29.885500 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:40:29.885731 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:40:29.886011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:40:29.902134 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:40:29.902476 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:40:29.902811 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:40:29.903386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:40:30.106245 ignition[1016]: INFO : Ignition 2.19.0
Jan 17 00:40:30.106245 ignition[1016]: INFO : Stage: umount
Jan 17 00:40:30.106245 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:40:30.106245 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:40:29.903555 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:40:30.156703 ignition[1016]: INFO : umount: umount passed
Jan 17 00:40:30.156703 ignition[1016]: INFO : Ignition finished successfully
Jan 17 00:40:29.903655 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:40:29.909754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:40:29.919832 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:40:29.920219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:40:29.922183 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:40:29.922402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:40:29.922645 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:40:29.922739 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:40:29.933406 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:40:29.945738 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:40:29.954557 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:40:29.955248 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:40:29.955575 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:40:29.956037 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:40:29.956269 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:40:29.960557 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:40:29.960818 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:40:29.961565 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:40:29.963010 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:40:30.035230 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:40:30.042293 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:40:30.042854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:40:30.053991 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:40:30.073304 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:40:30.076858 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:40:30.094766 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:40:30.094955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:40:30.114189 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:40:30.114418 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:40:30.121002 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:40:30.122538 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:40:30.136052 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:40:30.138504 systemd[1]: Stopped target network.target - Network.
Jan 17 00:40:30.144948 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:40:30.145063 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:40:30.150430 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:40:30.150508 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:40:30.165890 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:40:30.165973 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:40:30.176272 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:40:30.176398 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:40:30.180385 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:40:30.207395 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:40:30.215626 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:40:30.215806 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:40:30.223474 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:40:30.223601 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:40:30.235217 systemd-networkd[788]: eth0: DHCPv6 lease lost
Jan 17 00:40:30.247619 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:40:30.247865 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:40:30.274497 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:40:30.274746 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:40:30.320470 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:40:30.320574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:40:30.492014 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:40:30.517308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:40:30.517862 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:40:30.524045 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:40:30.524209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:40:30.530615 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:40:30.530814 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:40:30.553193 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:40:30.553309 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:40:30.563622 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:40:30.650897 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:40:30.651252 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:40:30.712059 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:40:30.712243 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:40:30.723264 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:40:30.723387 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:40:30.737015 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:40:30.737186 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:40:30.747177 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:40:30.747264 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:40:30.754222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:40:30.754331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:40:30.827791 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:40:30.843143 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:40:30.843264 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:40:30.863309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:40:30.863588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:30.911428 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:40:30.911635 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:40:30.964999 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:40:30.965229 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:40:30.991522 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:40:31.022431 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:40:31.093873 systemd[1]: Switching root.
Jan 17 00:40:31.154644 systemd-journald[194]: Journal stopped
Jan 17 00:40:35.794032 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:40:35.794588 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:40:35.794609 kernel: SELinux: policy capability open_perms=1
Jan 17 00:40:35.794624 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:40:35.794643 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:40:35.794657 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:40:35.794997 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:40:35.795015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:40:35.795029 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:40:35.795044 kernel: audit: type=1403 audit(1768610431.646:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:40:35.795069 systemd[1]: Successfully loaded SELinux policy in 104.023ms.
Jan 17 00:40:35.795740 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.663ms.
Jan 17 00:40:35.795766 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:40:35.795782 systemd[1]: Detected virtualization kvm.
Jan 17 00:40:35.795807 systemd[1]: Detected architecture x86-64.
Jan 17 00:40:35.795823 systemd[1]: Detected first boot.
Jan 17 00:40:35.795838 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:40:35.795852 zram_generator::config[1060]: No configuration found.
Jan 17 00:40:35.795876 systemd[1]: Populated /etc with preset unit settings.
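After the switch into the real root, PID 1 loads the SELinux policy (104.023ms here) and relabels /dev, /run, and the cgroup tree before re-executing. The resulting mode can be checked once the system is up; Flatcar has shipped SELinux in permissive mode by default, but verify on your build. The first command reads the kernel interface directly, the second assumes the usual SELinux userspace tools are installed:

    # 0 = permissive, 1 = enforcing; reads the kernel interface directly.
    cat /sys/fs/selinux/enforce
    # Equivalent, where the SELinux userspace tools are present:
    getenforce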
Jan 17 00:40:35.795896 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:40:35.795919 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:40:35.795937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:40:35.795956 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:40:35.795972 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:40:35.795988 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:40:35.796003 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:40:35.796021 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:40:35.796037 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:40:35.796056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:40:35.796072 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:40:35.796794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:40:35.796821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:40:35.797031 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:40:35.797049 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:40:35.797066 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:40:35.797084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:40:35.797502 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:40:35.797525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:40:35.797540 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:40:35.797556 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:40:35.797574 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:40:35.797820 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:40:35.797843 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:40:35.797859 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:40:35.797874 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:40:35.797894 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:40:35.797913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:40:35.797928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:40:35.797943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:40:35.797959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:40:35.797973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:40:35.797992 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:40:35.798007 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:40:35.799885 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:40:35.799912 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:40:35.799930 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:35.799948 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:40:35.799973 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:40:35.799991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:40:35.800009 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:40:35.800026 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:40:35.800043 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:40:35.800065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:40:35.800083 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:40:35.800316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:40:35.800332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:40:35.800511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:40:35.800528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:40:35.800543 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:40:35.800562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:40:35.800580 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:40:35.800600 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:40:35.800616 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:40:35.800635 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:40:35.800986 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:40:35.801009 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:40:35.801024 kernel: loop: module loaded
Jan 17 00:40:35.801041 kernel: fuse: init (API version 7.39)
Jan 17 00:40:35.801056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:40:35.801071 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:40:35.801532 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:40:35.801552 kernel: ACPI: bus type drm_connector registered
Jan 17 00:40:35.801595 systemd-journald[1144]: Collecting audit messages is disabled.
Jan 17 00:40:35.801628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:40:35.801645 systemd-journald[1144]: Journal started
Jan 17 00:40:35.801673 systemd-journald[1144]: Runtime Journal (/run/log/journal/a223240c87ed4c6e8ae7529a83412908) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:40:34.223663 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:40:34.276022 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 00:40:34.277070 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:40:34.279518 systemd[1]: systemd-journald.service: Consumed 1.907s CPU time.
Jan 17 00:40:35.814967 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:40:35.815556 systemd[1]: Stopped verity-setup.service.
Jan 17 00:40:35.843798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:35.857173 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:40:35.898472 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:40:35.903633 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:40:35.930217 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:40:35.940010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:40:35.946209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:40:35.950873 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:40:35.955415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:40:35.971621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:40:35.980224 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:40:35.981066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:40:36.085596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:40:36.111592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:40:36.157983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:40:36.158412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:40:36.166765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:40:36.167023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:40:36.178989 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:40:36.179285 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:40:36.189905 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:40:36.194879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:40:36.220944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:40:36.241947 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:40:36.268433 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:40:36.327028 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:40:36.372036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:40:36.398573 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:40:36.402487 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:40:36.402574 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:40:36.410800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:40:36.429347 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:40:36.437140 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:40:36.442014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:40:36.448485 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:40:36.454488 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:40:36.467420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:40:36.470342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:40:36.482676 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:40:36.492248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:40:36.522420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:40:36.571267 systemd-journald[1144]: Time spent on flushing to /var/log/journal/a223240c87ed4c6e8ae7529a83412908 is 70.528ms for 984 entries.
Jan 17 00:40:36.571267 systemd-journald[1144]: System Journal (/var/log/journal/a223240c87ed4c6e8ae7529a83412908) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:40:37.579520 systemd-journald[1144]: Received client request to flush runtime journal.
Jan 17 00:40:37.579684 kernel: loop0: detected capacity change from 0 to 140768
Jan 17 00:40:36.544653 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:40:36.560540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:40:36.578916 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:40:36.614161 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:40:36.769682 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:40:36.883600 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:40:36.907531 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:40:36.940164 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:40:36.959525 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:40:37.053699 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:40:37.603876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:40:37.609537 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:40:37.628441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:40:37.631252 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:40:37.633331 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:40:37.666678 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:40:37.726836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
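The journald entries above show the runtime journal in /run/log/journal (6.0M) being flushed to the persistent system journal in /var/log/journal (8.0M, max 195.6M) by systemd-journal-flush.service. The same state can be inspected, and the same flush triggered, by hand after boot; these are standard journalctl verbs, not commands taken from the log:

    # Show how much disk the active and archived journal files occupy.
    journalctl --disk-usage
    # Ask journald to flush /run/log/journal to /var/log/journal,
    # which is what systemd-journal-flush.service does during boot.
    journalctl --flush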
Jan 17 00:40:37.767811 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 00:40:38.055234 kernel: loop2: detected capacity change from 0 to 219144
Jan 17 00:40:38.151462 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 17 00:40:38.151490 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 17 00:40:38.544030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:40:38.622410 kernel: loop3: detected capacity change from 0 to 140768
Jan 17 00:40:38.716545 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 00:40:38.773177 kernel: loop5: detected capacity change from 0 to 219144
Jan 17 00:40:38.849700 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 17 00:40:38.854213 (sd-merge)[1198]: Merged extensions into '/usr'.
Jan 17 00:40:38.865547 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:40:38.866213 systemd[1]: Reloading...
Jan 17 00:40:39.079322 zram_generator::config[1224]: No configuration found.
Jan 17 00:40:39.362072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:40:39.906575 systemd[1]: Reloading finished in 1039 ms.
Jan 17 00:40:39.979028 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:40:39.991022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:40:40.030438 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:40:40.037086 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:40:40.052347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:40:40.063011 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:40:40.063031 systemd[1]: Reloading...
Jan 17 00:40:40.192530 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:40:40.217825 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:40:40.220533 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Jan 17 00:40:40.222660 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:40:40.227061 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:40:40.230182 zram_generator::config[1287]: No configuration found.
Jan 17 00:40:40.227799 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 17 00:40:40.227931 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 17 00:40:40.237680 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:40:40.237855 systemd-tmpfiles[1262]: Skipping /boot
Jan 17 00:40:40.333935 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
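The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what the preceding loop0–loop5 capacity changes correspond to: each .raw image is attached to a loop device before being merged. The merge can be inspected or redone at runtime with the standard systemd-sysext verbs:

    # List the hierarchies (/usr, /opt) and which extensions are merged.
    systemd-sysext status
    # Unmerge and re-merge, picking up images added to or removed from
    # /etc/extensions or /var/lib/extensions.
    systemd-sysext refresh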
Jan 17 00:40:40.333973 systemd-tmpfiles[1262]: Skipping /boot
Jan 17 00:40:40.962041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:40:40.970960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1321)
Jan 17 00:40:41.173186 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:40:41.173979 systemd[1]: Reloading finished in 1110 ms.
Jan 17 00:40:41.222183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:40:41.228060 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:40:41.418720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:40:41.436335 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 00:40:41.521912 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 00:40:41.536200 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:40:41.571288 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 17 00:40:41.572187 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 00:40:41.576803 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 00:40:41.583408 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 00:40:41.680784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:41.715994 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:40:41.726137 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:40:41.732596 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:40:41.742574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:40:41.747595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:40:41.764456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:40:41.864223 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:40:41.871474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:40:41.879901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:40:41.930718 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:40:41.958020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:40:41.968304 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:40:41.976887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:41.987999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:40:41.989043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:40:42.041223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:40:42.044740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:40:42.072745 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:40:42.106486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:40:42.213993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:40:42.219771 augenrules[1387]: No rules
Jan 17 00:40:42.259861 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:40:42.294143 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:40:42.307605 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:40:42.317226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:40:42.338013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:40:42.360319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:42.360609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:40:42.434573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:40:42.447458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:40:42.463311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:40:42.498941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:40:42.540239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:40:42.550790 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:40:42.573414 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:40:42.596560 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:40:42.616492 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:40:42.637161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:40:42.647982 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:40:42.648058 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:40:42.651759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:40:42.652049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:40:42.665692 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:40:42.665972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:40:42.683449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:40:42.683733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:40:42.689558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:40:42.689869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:40:42.694685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:40:42.700602 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:40:42.715330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:40:42.715520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:40:42.895465 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:40:42.975780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:40:43.350578 systemd-networkd[1373]: lo: Link UP
Jan 17 00:40:43.350587 systemd-networkd[1373]: lo: Gained carrier
Jan 17 00:40:43.359045 systemd-networkd[1373]: Enumeration completed
Jan 17 00:40:43.360478 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:40:43.361956 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:40:43.362787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:40:43.404315 systemd-networkd[1373]: eth0: Link UP
Jan 17 00:40:43.404332 systemd-networkd[1373]: eth0: Gained carrier
Jan 17 00:40:43.406214 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:40:43.469038 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:40:43.487784 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:40:43.494655 systemd-resolved[1377]: Positive Trust Anchors:
Jan 17 00:40:43.494682 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:40:43.494737 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:40:43.500903 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:40:43.502595 systemd-resolved[1377]: Defaulting to hostname 'linux'.
Jan 17 00:40:43.510507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:40:43.520589 systemd[1]: Reached target network.target - Network.
Jan 17 00:40:43.529976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:40:43.557312 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:40:43.560799 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Jan 17 00:40:43.569212 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 17 00:40:43.569612 systemd-timesyncd[1402]: Initial clock synchronization to Sat 2026-01-17 00:40:43.738602 UTC.
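Above, systemd-resolved installs the DNSSEC root trust anchor (the ". IN DS 20326 ..." entry) and timesyncd syncs against the NTP server handed out over DHCP (10.0.0.1:123). Both subsystems expose their runtime state through standard query commands, shown here as a generic sketch:

    # Per-link DNS servers, search domains, and DNSSEC/trust-anchor state.
    resolvectl status
    # Current NTP server, poll interval, offset, and jitter for timesyncd.
    timedatectl timesync-status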
Jan 17 00:40:43.933442 kernel: kvm_amd: TSC scaling supported
Jan 17 00:40:43.933686 kernel: kvm_amd: Nested Virtualization enabled
Jan 17 00:40:43.933712 kernel: kvm_amd: Nested Paging enabled
Jan 17 00:40:43.940484 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 17 00:40:43.940572 kernel: kvm_amd: PMU virtualization is disabled
Jan 17 00:40:44.258917 kernel: EDAC MC: Ver: 3.0.0
Jan 17 00:40:44.324427 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:40:44.369520 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:40:44.405214 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:40:44.482027 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:40:44.533002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:40:44.550675 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:40:44.562911 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:40:44.578650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:40:44.592488 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:40:44.604999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:40:44.613531 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:40:44.628316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:40:44.629542 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:40:44.638419 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:40:44.647345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:40:44.666532 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:40:44.694335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:40:44.731449 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:40:44.742588 systemd-networkd[1373]: eth0: Gained IPv6LL
Jan 17 00:40:44.754037 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:40:44.763175 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:40:44.783888 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:40:44.793659 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:40:44.807431 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:40:44.815644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:40:44.815687 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:40:44.827781 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:40:44.839937 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:40:44.840635 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 17 00:40:44.871241 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:40:44.885975 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:40:44.901766 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:40:44.917081 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:40:44.917970 jq[1439]: false
Jan 17 00:40:44.928781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:40:44.943415 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:40:44.960421 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found loop3
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found loop4
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found loop5
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found sr0
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda1
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda2
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda3
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found usr
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda4
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda6
Jan 17 00:40:45.012366 extend-filesystems[1440]: Found vda7
Jan 17 00:40:45.169570 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 17 00:40:45.013593 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:40:45.169777 extend-filesystems[1440]: Found vda9
Jan 17 00:40:45.169777 extend-filesystems[1440]: Checking size of /dev/vda9
Jan 17 00:40:45.169777 extend-filesystems[1440]: Resized partition /dev/vda9
Jan 17 00:40:45.212431 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1339)
Jan 17 00:40:45.045095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:40:45.201651 dbus-daemon[1438]: [system] SELinux support is enabled
Jan 17 00:40:45.280212 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:40:45.074961 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:40:45.093098 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:40:45.102703 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:40:45.103518 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:40:45.108728 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:40:45.285595 update_engine[1460]: I20260117 00:40:45.272294 1460 main.cc:92] Flatcar Update Engine starting
Jan 17 00:40:45.285595 update_engine[1460]: I20260117 00:40:45.274977 1460 update_check_scheduler.cc:74] Next update check in 3m44s
Jan 17 00:40:45.123284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:40:45.124964 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:40:45.136985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
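The kernel line above records extend-filesystems.service driving an online grow of the ext4 root on vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 7.1 GiB, filling out the virtual disk on first boot. The manual equivalent, once the partition itself has already been enlarged, is a single resize2fs call; this is a generic sketch, not the exact invocation the service uses:

    # Online-grow a mounted ext4 filesystem to fill its (already
    # enlarged) partition; with no size argument resize2fs expands
    # to the full partition automatically.
    resize2fs /dev/vda9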
Jan 17 00:40:45.137325 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:40:45.306247 jq[1461]: true Jan 17 00:40:45.141055 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:40:45.141367 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:40:45.153192 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:40:45.306839 jq[1478]: true Jan 17 00:40:45.153482 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:40:45.206622 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:40:45.219688 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:40:45.239529 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:40:45.243151 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:40:45.350186 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:40:45.350845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:40:45.456847 tar[1466]: linux-amd64/LICENSE Jan 17 00:40:45.362429 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:40:45.396491 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:40:45.474602 tar[1466]: linux-amd64/helm Jan 17 00:40:45.474670 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:40:45.474670 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:40:45.474670 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:40:45.396950 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:40:45.569227 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Jan 17 00:40:45.396992 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:40:45.402168 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:40:45.402204 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:40:45.434666 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:40:45.454059 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:40:45.454095 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:40:45.460782 systemd-logind[1459]: New seat seat0. Jan 17 00:40:45.464622 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:40:45.486805 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:40:45.489581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:40:45.715084 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:40:45.783259 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:40:45.790763 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
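[Annotation] The extend-filesystems run above grows the ROOT filesystem in place, from 553472 to 1864699 4k blocks, while it stays mounted at /; ext4 supports online growing, so no unmount or reboot is involved. As a rough manual equivalent (a sketch only, not the Flatcar service itself; assumes /dev/vda9 is the mounted ROOT partition, as in this log):

    # Online-grow the mounted ext4 filesystem to fill its partition.
    sudo resize2fs /dev/vda9
    # Confirm the new size matches the "resized filesystem" kernel message.
    sudo dumpe2fs -h /dev/vda9 | grep 'Block count'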
Jan 17 00:40:45.817044 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:40:45.858634 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:40:45.866031 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:40:45.872317 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:40:45.964352 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:40:45.966043 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:40:46.013287 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:40:46.275930 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:40:46.357411 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:40:46.375805 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:40:46.383272 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:40:47.014668 containerd[1476]: time="2026-01-17T00:40:47.012482352Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:40:47.171462 containerd[1476]: time="2026-01-17T00:40:47.170671988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.177746 containerd[1476]: time="2026-01-17T00:40:47.177437299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:40:47.177746 containerd[1476]: time="2026-01-17T00:40:47.177482333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:40:47.177746 containerd[1476]: time="2026-01-17T00:40:47.177583417Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178238735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178264128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178348182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178366919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178779709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178800621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178818149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.178833390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.179132 containerd[1476]: time="2026-01-17T00:40:47.179012655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.180049 containerd[1476]: time="2026-01-17T00:40:47.179698821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:40:47.196874 containerd[1476]: time="2026-01-17T00:40:47.196025183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:40:47.196874 containerd[1476]: time="2026-01-17T00:40:47.196091048Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:40:47.196874 containerd[1476]: time="2026-01-17T00:40:47.196449698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:40:47.196874 containerd[1476]: time="2026-01-17T00:40:47.196567102Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:40:47.327954 containerd[1476]: time="2026-01-17T00:40:47.325334021Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:40:47.331652 containerd[1476]: time="2026-01-17T00:40:47.329685950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:40:47.331652 containerd[1476]: time="2026-01-17T00:40:47.329866963Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:40:47.331652 containerd[1476]: time="2026-01-17T00:40:47.329966451Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:40:47.331652 containerd[1476]: time="2026-01-17T00:40:47.330269205Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:40:47.331652 containerd[1476]: time="2026-01-17T00:40:47.330936726Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333329715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333642965Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333665299Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333687258Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333705223Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333850732Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.333908213Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334001067Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334067775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334084917Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334174905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334191935Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334360845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336241 containerd[1476]: time="2026-01-17T00:40:47.334454450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334472872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334490461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334506537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334522876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334539307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334556134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334574465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334649465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334667857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334683647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334730968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334753140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334783054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334828455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.336710 containerd[1476]: time="2026-01-17T00:40:47.334875135Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335065648Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335090746Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335170501Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335190794Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335204186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335221348Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335236326Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:40:47.337061 containerd[1476]: time="2026-01-17T00:40:47.335297283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.340363476Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.340497563Z" level=info msg="Connect containerd service" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.340555117Z" level=info msg="using legacy CRI server" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.340566233Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.340762578Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342303995Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:40:47.343380 
containerd[1476]: time="2026-01-17T00:40:47.342732148Z" level=info msg="Start subscribing containerd event" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342787132Z" level=info msg="Start recovering state" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342866948Z" level=info msg="Start event monitor" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342894281Z" level=info msg="Start snapshots syncer" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342905845Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:40:47.343380 containerd[1476]: time="2026-01-17T00:40:47.342917612Z" level=info msg="Start streaming server" Jan 17 00:40:47.352483 containerd[1476]: time="2026-01-17T00:40:47.343701824Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:40:47.352483 containerd[1476]: time="2026-01-17T00:40:47.343987761Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:40:47.352483 containerd[1476]: time="2026-01-17T00:40:47.345305987Z" level=info msg="containerd successfully booted in 0.337272s" Jan 17 00:40:47.345471 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:40:48.376378 tar[1466]: linux-amd64/README.md Jan 17 00:40:48.424576 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:40:50.064409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:50.070547 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:40:50.080667 systemd[1]: Startup finished in 2.346s (kernel) + 20.631s (initrd) + 18.530s (userspace) = 41.508s. Jan 17 00:40:50.133380 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:40:52.313619 kubelet[1550]: E0117 00:40:52.311639 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:40:52.319908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:40:52.320299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:40:52.325930 systemd[1]: kubelet.service: Consumed 3.546s CPU time. Jan 17 00:40:54.430069 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:40:54.458265 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944). Jan 17 00:40:54.679779 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:54.691339 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:54.727691 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:40:54.745710 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:40:54.764661 systemd-logind[1459]: New session 1 of user core. Jan 17 00:40:54.810659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:40:54.831589 systemd[1]: Starting user@500.service - User Manager for UID 500... 
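[Annotation] The CRI plugin error logged above, "no network config found in /etc/cni/net.d", is expected on first boot: containerd's cni conf syncer watches that directory (NetworkPluginConfDir in the config dump) and pod networking stays unready until a config file appears. In a kubeadm cluster this is normally installed by a CNI add-on, but a minimal hand-written bridge config would look like the following sketch (illustrative values; assumes the standard bridge/host-local/portmap plugins exist under /opt/cni/bin, the NetworkPluginBinDir from the same config dump):

    # Write a minimal CNI conflist so the conf syncer picks it up.
    sudo mkdir -p /etc/cni/net.d
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF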
Jan 17 00:40:54.848786 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:40:55.306734 systemd[1568]: Queued start job for default target default.target. Jan 17 00:40:55.325575 systemd[1568]: Created slice app.slice - User Application Slice. Jan 17 00:40:55.325639 systemd[1568]: Reached target paths.target - Paths. Jan 17 00:40:55.325659 systemd[1568]: Reached target timers.target - Timers. Jan 17 00:40:55.335436 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:40:55.367905 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:40:55.368018 systemd[1568]: Reached target sockets.target - Sockets. Jan 17 00:40:55.368041 systemd[1568]: Reached target basic.target - Basic System. Jan 17 00:40:55.368180 systemd[1568]: Reached target default.target - Main User Target. Jan 17 00:40:55.368279 systemd[1568]: Startup finished in 345ms. Jan 17 00:40:55.369779 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:40:55.386893 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:40:55.540869 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:56954.service - OpenSSH per-connection server daemon (10.0.0.1:56954). Jan 17 00:40:55.642365 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 56954 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:55.643437 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:55.661604 systemd-logind[1459]: New session 2 of user core. Jan 17 00:40:55.671475 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:40:55.831887 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 17 00:40:55.851906 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:56954.service: Deactivated successfully. Jan 17 00:40:55.854787 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:40:55.857880 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:40:55.869664 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:56964.service - OpenSSH per-connection server daemon (10.0.0.1:56964). Jan 17 00:40:55.871815 systemd-logind[1459]: Removed session 2. Jan 17 00:40:55.939795 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 56964 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:55.951929 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:55.976454 systemd-logind[1459]: New session 3 of user core. Jan 17 00:40:55.985558 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:40:56.056870 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 17 00:40:56.069405 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:56964.service: Deactivated successfully. Jan 17 00:40:56.071701 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:40:56.075612 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:40:56.091068 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:56980.service - OpenSSH per-connection server daemon (10.0.0.1:56980). Jan 17 00:40:56.096224 systemd-logind[1459]: Removed session 3. 
Jan 17 00:40:56.151940 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 56980 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:56.153713 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:56.166290 systemd-logind[1459]: New session 4 of user core. Jan 17 00:40:56.172318 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:40:56.259741 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 17 00:40:56.278359 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:56980.service: Deactivated successfully. Jan 17 00:40:56.288829 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:40:56.298024 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:40:56.306311 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:56982.service - OpenSSH per-connection server daemon (10.0.0.1:56982). Jan 17 00:40:56.317152 systemd-logind[1459]: Removed session 4. Jan 17 00:40:56.441880 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 56982 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:56.450635 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:56.474946 systemd-logind[1459]: New session 5 of user core. Jan 17 00:40:56.489897 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:40:56.616227 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:40:56.616689 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:40:56.643538 sudo[1603]: pam_unix(sudo:session): session closed for user root Jan 17 00:40:56.655849 sshd[1600]: pam_unix(sshd:session): session closed for user core Jan 17 00:40:56.674037 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:56982.service: Deactivated successfully. Jan 17 00:40:56.689649 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:40:56.693612 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:40:56.716705 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:56988.service - OpenSSH per-connection server daemon (10.0.0.1:56988). Jan 17 00:40:56.719924 systemd-logind[1459]: Removed session 5. Jan 17 00:40:56.802280 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 56988 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:56.808811 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:56.826638 systemd-logind[1459]: New session 6 of user core. Jan 17 00:40:56.835514 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:40:56.920658 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:40:56.923216 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:40:56.939555 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 17 00:40:56.959033 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:40:56.959810 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:40:56.994186 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:40:57.015592 auditctl[1615]: No rules Jan 17 00:40:57.017062 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 17 00:40:57.020166 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:40:57.044242 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:40:57.116995 augenrules[1633]: No rules Jan 17 00:40:57.119285 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:40:57.120894 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 17 00:40:57.124231 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 17 00:40:57.159045 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:56988.service: Deactivated successfully. Jan 17 00:40:57.163860 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:40:57.173701 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:40:57.186348 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:57004.service - OpenSSH per-connection server daemon (10.0.0.1:57004). Jan 17 00:40:57.199083 systemd-logind[1459]: Removed session 6. Jan 17 00:40:57.268032 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 57004 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:40:57.270822 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:40:57.283531 systemd-logind[1459]: New session 7 of user core. Jan 17 00:40:57.293345 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:40:57.401289 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:40:57.401805 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:40:59.756692 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:40:59.776459 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:41:02.207864 dockerd[1662]: time="2026-01-17T00:41:02.206718664Z" level=info msg="Starting up" Jan 17 00:41:02.466370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:41:02.509016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:03.057026 systemd[1]: var-lib-docker-metacopy\x2dcheck45544421-merged.mount: Deactivated successfully. Jan 17 00:41:03.367300 dockerd[1662]: time="2026-01-17T00:41:03.356796653Z" level=info msg="Loading containers: start." Jan 17 00:41:03.588256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:41:03.589017 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:41:03.927737 kubelet[1699]: E0117 00:41:03.927013 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:41:03.943305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:41:03.943848 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
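[Annotation] The kubelet crash loop here (exit status 1 on "open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: the unit is installed and keeps being restarted by systemd, but its KubeletConfiguration file only exists once `kubeadm init` or `kubeadm join` writes it. Purely as a sketch of what that file contains (kubeadm generates a much fuller one; field values below are illustrative):

    # Minimal KubeletConfiguration; cgroupDriver: systemd matches the
    # SystemdCgroup:true runc option in the CRI config logged earlier.
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
    sudo systemctl restart kubelet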
Jan 17 00:41:04.116969 kernel: Initializing XFRM netlink socket Jan 17 00:41:04.390517 systemd-networkd[1373]: docker0: Link UP Jan 17 00:41:04.447512 dockerd[1662]: time="2026-01-17T00:41:04.443608132Z" level=info msg="Loading containers: done." Jan 17 00:41:04.538502 dockerd[1662]: time="2026-01-17T00:41:04.537402516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:41:04.538502 dockerd[1662]: time="2026-01-17T00:41:04.537647904Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:41:04.538502 dockerd[1662]: time="2026-01-17T00:41:04.537852827Z" level=info msg="Daemon has completed initialization" Jan 17 00:41:04.795032 dockerd[1662]: time="2026-01-17T00:41:04.792252134Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:41:04.796467 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:41:07.390074 containerd[1476]: time="2026-01-17T00:41:07.388307549Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:41:08.229935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906581073.mount: Deactivated successfully. Jan 17 00:41:14.085519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:41:14.112549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:14.162756 containerd[1476]: time="2026-01-17T00:41:14.161266895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:14.189912 containerd[1476]: time="2026-01-17T00:41:14.188640311Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 17 00:41:14.193720 containerd[1476]: time="2026-01-17T00:41:14.193227836Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:14.214875 containerd[1476]: time="2026-01-17T00:41:14.214770412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:14.217157 containerd[1476]: time="2026-01-17T00:41:14.216763922Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 6.828402931s" Jan 17 00:41:14.217157 containerd[1476]: time="2026-01-17T00:41:14.216828930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:41:14.230471 containerd[1476]: time="2026-01-17T00:41:14.229561970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:41:14.609406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
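[Annotation] The PullImage / "Pulled image" pairs above and below are containerd's CRI image service fetching the control-plane images by tag, then recording the resolved digest, unpacked size, and elapsed time. The same pull can be driven by hand over the CRI socket, e.g. with crictl (assumed to be installed; it is not part of this boot image):

    # Pull one of the images seen above via the CRI, then list what landed.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.34.3
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images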
Jan 17 00:41:14.633864 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:41:16.101326 kubelet[1897]: E0117 00:41:16.101032 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:41:16.246060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:41:16.251676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:41:16.275077 systemd[1]: kubelet.service: Consumed 1.784s CPU time. Jan 17 00:41:22.763632 containerd[1476]: time="2026-01-17T00:41:22.763227783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:22.763632 containerd[1476]: time="2026-01-17T00:41:22.765697654Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 17 00:41:22.774214 containerd[1476]: time="2026-01-17T00:41:22.774065935Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:22.796570 containerd[1476]: time="2026-01-17T00:41:22.796324967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:22.884850 containerd[1476]: time="2026-01-17T00:41:22.882667749Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.652826191s" Jan 17 00:41:22.888080 containerd[1476]: time="2026-01-17T00:41:22.887493973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:41:22.909186 containerd[1476]: time="2026-01-17T00:41:22.900175868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:41:26.468940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:41:26.494557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:27.302396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:41:27.307012 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:41:27.631776 containerd[1476]: time="2026-01-17T00:41:27.620167805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:27.631776 containerd[1476]: time="2026-01-17T00:41:27.628062474Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 17 00:41:27.647004 containerd[1476]: time="2026-01-17T00:41:27.644972695Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:27.660614 containerd[1476]: time="2026-01-17T00:41:27.655711563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:27.660614 containerd[1476]: time="2026-01-17T00:41:27.660060025Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 4.759801141s" Jan 17 00:41:27.660614 containerd[1476]: time="2026-01-17T00:41:27.660168038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:41:27.666340 containerd[1476]: time="2026-01-17T00:41:27.665790557Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:41:27.736473 kubelet[1921]: E0117 00:41:27.735274 1921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:41:27.762186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:41:27.762461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:41:30.628847 update_engine[1460]: I20260117 00:41:30.605509 1460 update_attempter.cc:509] Updating boot flags... Jan 17 00:41:30.952145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1942) Jan 17 00:41:31.151170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1941) Jan 17 00:41:31.483043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656386433.mount: Deactivated successfully. 
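[Annotation] The "BTRFS warning: duplicate device /dev/vda3" messages surrounding the boot-flag update appear to be benign: update_engine rewrites the A/B partition priority attributes, udev workers rescan the block devices, and the same device gets reported twice. Two quick checks (sketch; cgpt ships with Flatcar):

    # The filesystem should list a single registered device.
    sudo btrfs filesystem show /dev/vda3
    # Inspect the GPT priority attributes update_engine just updated.
    cgpt show /dev/vda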
Jan 17 00:41:33.376120 containerd[1476]: time="2026-01-17T00:41:33.373569960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:33.376120 containerd[1476]: time="2026-01-17T00:41:33.376067820Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 17 00:41:33.380626 containerd[1476]: time="2026-01-17T00:41:33.380588324Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:33.390627 containerd[1476]: time="2026-01-17T00:41:33.390524589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:33.392707 containerd[1476]: time="2026-01-17T00:41:33.391747715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 5.725918913s" Jan 17 00:41:33.392707 containerd[1476]: time="2026-01-17T00:41:33.391810648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:41:33.401390 containerd[1476]: time="2026-01-17T00:41:33.400789265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:41:34.687538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72114187.mount: Deactivated successfully. Jan 17 00:41:37.995783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:41:38.058609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:38.617379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:41:38.731767 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:41:38.796740 containerd[1476]: time="2026-01-17T00:41:38.796377300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:38.800353 containerd[1476]: time="2026-01-17T00:41:38.800234781Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 17 00:41:38.803137 containerd[1476]: time="2026-01-17T00:41:38.802662434Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:38.808648 containerd[1476]: time="2026-01-17T00:41:38.808591971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:38.812640 containerd[1476]: time="2026-01-17T00:41:38.812569339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 5.411730878s" Jan 17 00:41:38.812640 containerd[1476]: time="2026-01-17T00:41:38.812627742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:41:38.824316 containerd[1476]: time="2026-01-17T00:41:38.824245476Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:41:39.024288 kubelet[2014]: E0117 00:41:39.024013 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:41:39.042518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:41:39.042823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:41:39.741578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032097144.mount: Deactivated successfully. 
Jan 17 00:41:39.762734 containerd[1476]: time="2026-01-17T00:41:39.762552464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:39.767623 containerd[1476]: time="2026-01-17T00:41:39.767048758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 17 00:41:39.775227 containerd[1476]: time="2026-01-17T00:41:39.774979999Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:39.781384 containerd[1476]: time="2026-01-17T00:41:39.780469309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:39.785192 containerd[1476]: time="2026-01-17T00:41:39.783738937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 959.428164ms" Jan 17 00:41:39.785192 containerd[1476]: time="2026-01-17T00:41:39.784593035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:41:39.790796 containerd[1476]: time="2026-01-17T00:41:39.789950083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:41:40.629620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965675428.mount: Deactivated successfully. Jan 17 00:41:49.504936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:41:49.528516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:50.691406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:41:50.726009 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:41:51.277132 kubelet[2087]: E0117 00:41:51.276613 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:41:51.285459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:41:51.325842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:41:51.368637 systemd[1]: kubelet.service: Consumed 1.008s CPU time. 
Jan 17 00:41:53.520077 containerd[1476]: time="2026-01-17T00:41:53.518030377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:53.526241 containerd[1476]: time="2026-01-17T00:41:53.520580725Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 17 00:41:53.526241 containerd[1476]: time="2026-01-17T00:41:53.525907304Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:53.532025 containerd[1476]: time="2026-01-17T00:41:53.531951637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:53.534174 containerd[1476]: time="2026-01-17T00:41:53.534038378Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 13.744031066s" Jan 17 00:41:53.534174 containerd[1476]: time="2026-01-17T00:41:53.534132739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:42:02.041586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:42:02.788928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:05.611988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:42:05.643382 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:42:05.926281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:06.717775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:06.903596 systemd[1]: Reloading requested from client PID 2134 ('systemctl') (unit session-7.scope)... Jan 17 00:42:06.904455 systemd[1]: Reloading... Jan 17 00:42:07.184864 zram_generator::config[2173]: No configuration found. Jan 17 00:42:07.837584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:42:08.017632 systemd[1]: Reloading finished in 1109 ms. Jan 17 00:42:08.143607 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:42:08.143756 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:42:08.144759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:08.176082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:08.848291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:08.889976 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:42:09.032667 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
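[Annotation] The daemon-reload requested from session-7 (likely the install.sh run earlier via sudo) and the changed environment line, now referencing only KUBELET_EXTRA_ARGS, suggest the bootstrap tooling has installed the kubelet drop-in and /var/lib/kubelet/config.yaml, so from here the kubelet starts for real instead of crash-looping. The "connection refused" errors against 10.0.0.115:6443 that follow are the usual bootstrap window on a control-plane node: the kubelet itself must first launch the API server from /etc/kubernetes/manifests before its watches and lease updates can succeed. To observe that stage (sketch; assumes crictl as before):

    # Static pod manifests the kubelet will launch...
    ls /etc/kubernetes/manifests
    # ...and the API server container once it is running.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        ps --name kube-apiserver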
Jan 17 00:42:09.033272 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:42:09.033272 kubelet[2222]: I0117 00:42:09.032799 2222 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:42:09.817758 kubelet[2222]: I0117 00:42:09.816146 2222 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:42:09.817758 kubelet[2222]: I0117 00:42:09.816326 2222 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:42:09.817758 kubelet[2222]: I0117 00:42:09.816848 2222 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:42:09.817758 kubelet[2222]: I0117 00:42:09.816879 2222 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:42:09.824613 kubelet[2222]: I0117 00:42:09.821905 2222 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:42:10.224840 kubelet[2222]: E0117 00:42:10.222856 2222 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:42:10.228857 kubelet[2222]: I0117 00:42:10.228330 2222 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:42:10.241779 kubelet[2222]: E0117 00:42:10.241639 2222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:42:10.241779 kubelet[2222]: I0117 00:42:10.241775 2222 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:42:10.255773 kubelet[2222]: I0117 00:42:10.251570 2222 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:42:10.255773 kubelet[2222]: I0117 00:42:10.251994 2222 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:42:10.255773 kubelet[2222]: I0117 00:42:10.252028 2222 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:42:10.255773 kubelet[2222]: I0117 00:42:10.252362 2222 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:42:10.256389 kubelet[2222]: I0117 00:42:10.252375 2222 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:42:10.256389 kubelet[2222]: I0117 00:42:10.252591 2222 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:42:10.275977 kubelet[2222]: I0117 00:42:10.275876 2222 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:10.280273 kubelet[2222]: I0117 00:42:10.278277 2222 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:42:10.280438 kubelet[2222]: I0117 00:42:10.280275 2222 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:42:10.280438 kubelet[2222]: I0117 00:42:10.280330 2222 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:42:10.280438 kubelet[2222]: I0117 00:42:10.280390 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:42:10.284133 kubelet[2222]: E0117 00:42:10.282507 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:42:10.284133 kubelet[2222]: E0117 00:42:10.283871 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:42:10.287005 kubelet[2222]: I0117 00:42:10.286947 2222 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:42:10.296314 kubelet[2222]: I0117 00:42:10.291013 2222 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:42:10.296314 kubelet[2222]: I0117 00:42:10.291062 2222 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:42:10.296314 kubelet[2222]: W0117 00:42:10.291197 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:42:10.300445 kubelet[2222]: I0117 00:42:10.300397 2222 server.go:1262] "Started kubelet" Jan 17 00:42:10.311678 kubelet[2222]: I0117 00:42:10.301501 2222 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:42:10.311678 kubelet[2222]: I0117 00:42:10.311566 2222 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:42:10.312253 kubelet[2222]: I0117 00:42:10.312005 2222 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:42:10.312253 kubelet[2222]: I0117 00:42:10.302505 2222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:42:10.323148 kubelet[2222]: I0117 00:42:10.302436 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:42:10.323148 kubelet[2222]: I0117 00:42:10.301534 2222 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:42:10.323148 kubelet[2222]: I0117 00:42:10.327470 2222 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:42:10.323148 kubelet[2222]: I0117 00:42:10.327627 2222 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:42:10.323148 kubelet[2222]: I0117 00:42:10.327736 2222 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:42:10.334881 kubelet[2222]: E0117 00:42:10.334820 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:42:10.335292 kubelet[2222]: I0117 00:42:10.335230 2222 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:42:10.335446 kubelet[2222]: I0117 00:42:10.335382 2222 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:42:10.336531 kubelet[2222]: E0117 00:42:10.336483 2222 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:42:10.337140 kubelet[2222]: E0117 00:42:10.336976 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Jan 17 00:42:10.348380 kubelet[2222]: E0117 00:42:10.339278 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5de82490f9ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,LastTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:42:10.353835 kubelet[2222]: I0117 00:42:10.353512 2222 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:42:10.356492 kubelet[2222]: E0117 00:42:10.355797 2222 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:42:10.356492 kubelet[2222]: I0117 00:42:10.356350 2222 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:42:10.388947 kubelet[2222]: I0117 00:42:10.387741 2222 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:42:10.417563 kubelet[2222]: I0117 00:42:10.417514 2222 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:42:10.417563 kubelet[2222]: I0117 00:42:10.417551 2222 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:42:10.417563 kubelet[2222]: I0117 00:42:10.417596 2222 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:10.432137 kubelet[2222]: I0117 00:42:10.429705 2222 policy_none.go:49] "None policy: Start" Jan 17 00:42:10.432137 kubelet[2222]: I0117 00:42:10.429759 2222 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:42:10.432137 kubelet[2222]: I0117 00:42:10.429785 2222 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:42:10.437546 kubelet[2222]: E0117 00:42:10.437498 2222 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:42:10.446419 kubelet[2222]: I0117 00:42:10.446366 2222 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:42:10.446513 kubelet[2222]: I0117 00:42:10.446447 2222 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:42:10.446545 kubelet[2222]: I0117 00:42:10.446512 2222 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:42:10.447318 kubelet[2222]: E0117 00:42:10.446582 2222 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:42:10.447692 kubelet[2222]: E0117 00:42:10.447525 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:42:10.451279 kubelet[2222]: I0117 00:42:10.450151 2222 policy_none.go:47] "Start" Jan 17 00:42:10.472971 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:42:10.504715 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:42:10.523422 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:42:10.526721 kubelet[2222]: E0117 00:42:10.526494 2222 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:42:10.527400 kubelet[2222]: I0117 00:42:10.527312 2222 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:42:10.527400 kubelet[2222]: I0117 00:42:10.527351 2222 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:42:10.527830 kubelet[2222]: I0117 00:42:10.527720 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:42:10.530684 kubelet[2222]: E0117 00:42:10.530656 2222 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:42:10.531083 kubelet[2222]: E0117 00:42:10.531027 2222 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:42:10.641293 kubelet[2222]: E0117 00:42:10.640909 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jan 17 00:42:10.649513 kubelet[2222]: I0117 00:42:10.649445 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:10.651215 kubelet[2222]: E0117 00:42:10.651074 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 17 00:42:10.669014 systemd[1]: Created slice kubepods-burstable-pod93882d24236584d6ce29851ba45d4e24.slice - libcontainer container kubepods-burstable-pod93882d24236584d6ce29851ba45d4e24.slice. 
Jan 17 00:42:10.692027 kubelet[2222]: E0117 00:42:10.691525 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:10.703460 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 17 00:42:10.711176 kubelet[2222]: E0117 00:42:10.711082 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:10.716592 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 17 00:42:10.733255 kubelet[2222]: E0117 00:42:10.726706 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:10.739874 kubelet[2222]: I0117 00:42:10.739741 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:10.739874 kubelet[2222]: I0117 00:42:10.739824 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:10.739874 kubelet[2222]: I0117 00:42:10.739854 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:10.739874 kubelet[2222]: I0117 00:42:10.739877 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:10.740272 kubelet[2222]: I0117 00:42:10.739896 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:10.740272 kubelet[2222]: I0117 00:42:10.739923 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:10.740272 kubelet[2222]: I0117 00:42:10.739952 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:10.740272 kubelet[2222]: I0117 00:42:10.739978 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:10.740272 kubelet[2222]: I0117 00:42:10.740009 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:10.746080 kubelet[2222]: E0117 00:42:10.744077 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5de82490f9ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,LastTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:42:10.864528 kubelet[2222]: I0117 00:42:10.862561 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:10.864528 kubelet[2222]: E0117 00:42:10.863914 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 17 00:42:11.099129 kubelet[2222]: E0117 00:42:11.087911 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jan 17 00:42:11.179145 kubelet[2222]: E0117 00:42:11.177175 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:11.179145 kubelet[2222]: E0117 00:42:11.178131 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:11.186341 kubelet[2222]: E0117 00:42:11.186307 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:11.197085 containerd[1476]: time="2026-01-17T00:42:11.187667230Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:11.249640 containerd[1476]: time="2026-01-17T00:42:11.198832409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:11.249640 containerd[1476]: time="2026-01-17T00:42:11.226996750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93882d24236584d6ce29851ba45d4e24,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:11.284242 kubelet[2222]: I0117 00:42:11.284075 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:11.285567 kubelet[2222]: E0117 00:42:11.285384 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 17 00:42:11.591674 kubelet[2222]: E0117 00:42:11.591030 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:42:11.617915 kubelet[2222]: E0117 00:42:11.617677 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:42:11.816721 kubelet[2222]: E0117 00:42:11.815900 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:42:11.893029 kubelet[2222]: E0117 00:42:11.891771 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:42:11.905027 kubelet[2222]: E0117 00:42:11.904865 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jan 17 00:42:12.094754 kubelet[2222]: I0117 00:42:12.093191 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:12.097975 kubelet[2222]: E0117 00:42:12.097871 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 17 00:42:12.174584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091348201.mount: Deactivated successfully. 
Jan 17 00:42:12.187760 containerd[1476]: time="2026-01-17T00:42:12.187619052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:12.195039 containerd[1476]: time="2026-01-17T00:42:12.194727677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:42:12.200031 containerd[1476]: time="2026-01-17T00:42:12.198961548Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:12.202321 containerd[1476]: time="2026-01-17T00:42:12.202208278Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:12.204376 containerd[1476]: time="2026-01-17T00:42:12.204336770Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:12.205788 containerd[1476]: time="2026-01-17T00:42:12.205700994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:42:12.206902 containerd[1476]: time="2026-01-17T00:42:12.206825484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:42:12.236808 containerd[1476]: time="2026-01-17T00:42:12.235717377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:42:12.236808 containerd[1476]: time="2026-01-17T00:42:12.236340378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.008789569s" Jan 17 00:42:12.251680 containerd[1476]: time="2026-01-17T00:42:12.248838330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.033739996s" Jan 17 00:42:12.251680 containerd[1476]: time="2026-01-17T00:42:12.249745107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.03471962s" Jan 17 00:42:12.299822 kubelet[2222]: E0117 00:42:12.299634 2222 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: 
connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:42:13.320441 containerd[1476]: time="2026-01-17T00:42:13.319799186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:13.320441 containerd[1476]: time="2026-01-17T00:42:13.320405574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:13.320441 containerd[1476]: time="2026-01-17T00:42:13.320637243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.325933 containerd[1476]: time="2026-01-17T00:42:13.325766768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.334646 containerd[1476]: time="2026-01-17T00:42:13.332784544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:13.334646 containerd[1476]: time="2026-01-17T00:42:13.332843135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:13.334646 containerd[1476]: time="2026-01-17T00:42:13.332857463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.334646 containerd[1476]: time="2026-01-17T00:42:13.332953634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.337756 containerd[1476]: time="2026-01-17T00:42:13.337550824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:13.337905 containerd[1476]: time="2026-01-17T00:42:13.337673766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:13.338155 containerd[1476]: time="2026-01-17T00:42:13.338013159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.338463 containerd[1476]: time="2026-01-17T00:42:13.338409099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:13.476597 systemd[1]: Started cri-containerd-9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7.scope - libcontainer container 9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7. Jan 17 00:42:13.480026 systemd[1]: Started cri-containerd-a03cafbb4b8b575188e0ab775b5798e504ad7f9d4d9448b5e39af44518f9db0a.scope - libcontainer container a03cafbb4b8b575188e0ab775b5798e504ad7f9d4d9448b5e39af44518f9db0a. Jan 17 00:42:13.483181 systemd[1]: Started cri-containerd-cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2.scope - libcontainer container cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2. 
Jan 17 00:42:13.506321 kubelet[2222]: E0117 00:42:13.506151 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="3.2s" Jan 17 00:42:13.708175 kubelet[2222]: I0117 00:42:13.708037 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:13.708930 kubelet[2222]: E0117 00:42:13.708879 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 17 00:42:13.711496 containerd[1476]: time="2026-01-17T00:42:13.711307582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2\"" Jan 17 00:42:13.724195 kubelet[2222]: E0117 00:42:13.717495 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:13.728700 containerd[1476]: time="2026-01-17T00:42:13.728589384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:93882d24236584d6ce29851ba45d4e24,Namespace:kube-system,Attempt:0,} returns sandbox id \"a03cafbb4b8b575188e0ab775b5798e504ad7f9d4d9448b5e39af44518f9db0a\"" Jan 17 00:42:13.731895 kubelet[2222]: E0117 00:42:13.731845 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:13.739190 containerd[1476]: time="2026-01-17T00:42:13.739145586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7\"" Jan 17 00:42:13.740073 kubelet[2222]: E0117 00:42:13.740037 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:13.747327 containerd[1476]: time="2026-01-17T00:42:13.747244077Z" level=info msg="CreateContainer within sandbox \"cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:42:13.759778 containerd[1476]: time="2026-01-17T00:42:13.759682507Z" level=info msg="CreateContainer within sandbox \"a03cafbb4b8b575188e0ab775b5798e504ad7f9d4d9448b5e39af44518f9db0a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:42:13.770429 containerd[1476]: time="2026-01-17T00:42:13.770195383Z" level=info msg="CreateContainer within sandbox \"9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:42:13.795751 containerd[1476]: time="2026-01-17T00:42:13.795620984Z" level=info msg="CreateContainer within sandbox \"cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f\"" Jan 17 
00:42:13.796826 containerd[1476]: time="2026-01-17T00:42:13.796741842Z" level=info msg="StartContainer for \"0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f\"" Jan 17 00:42:13.802814 containerd[1476]: time="2026-01-17T00:42:13.802757299Z" level=info msg="CreateContainer within sandbox \"a03cafbb4b8b575188e0ab775b5798e504ad7f9d4d9448b5e39af44518f9db0a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"98cc8d75fce99f4208d292ce29b00255c815956bed9d8a6b8cfacbe87bfdc401\"" Jan 17 00:42:13.803753 containerd[1476]: time="2026-01-17T00:42:13.803676831Z" level=info msg="StartContainer for \"98cc8d75fce99f4208d292ce29b00255c815956bed9d8a6b8cfacbe87bfdc401\"" Jan 17 00:42:13.807386 containerd[1476]: time="2026-01-17T00:42:13.807216233Z" level=info msg="CreateContainer within sandbox \"9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8\"" Jan 17 00:42:13.808182 containerd[1476]: time="2026-01-17T00:42:13.808140684Z" level=info msg="StartContainer for \"c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8\"" Jan 17 00:42:13.852333 systemd[1]: Started cri-containerd-0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f.scope - libcontainer container 0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f. Jan 17 00:42:13.854174 systemd[1]: Started cri-containerd-98cc8d75fce99f4208d292ce29b00255c815956bed9d8a6b8cfacbe87bfdc401.scope - libcontainer container 98cc8d75fce99f4208d292ce29b00255c815956bed9d8a6b8cfacbe87bfdc401. Jan 17 00:42:13.862979 systemd[1]: Started cri-containerd-c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8.scope - libcontainer container c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8. 
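The three StartContainer calls above target the sandboxes created moments earlier, and systemd has just started the matching cri-containerd-<id>.scope units. With the API server still refusing connections, the CRI is the only vantage point on these containers; a sketch, assuming crictl targets this containerd instance:

    crictl pods --name kube-scheduler-localhost
    crictl ps --name kube-controller-manager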
Jan 17 00:42:14.181438 containerd[1476]: time="2026-01-17T00:42:14.178932510Z" level=info msg="StartContainer for \"98cc8d75fce99f4208d292ce29b00255c815956bed9d8a6b8cfacbe87bfdc401\" returns successfully" Jan 17 00:42:14.188938 containerd[1476]: time="2026-01-17T00:42:14.188898941Z" level=info msg="StartContainer for \"c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8\" returns successfully" Jan 17 00:42:14.195315 containerd[1476]: time="2026-01-17T00:42:14.194953001Z" level=info msg="StartContainer for \"0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f\" returns successfully" Jan 17 00:42:14.197536 kubelet[2222]: E0117 00:42:14.197474 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:42:14.942383 kubelet[2222]: E0117 00:42:14.942183 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:42:14.949133 kubelet[2222]: E0117 00:42:14.942456 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:42:14.952174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176965700.mount: Deactivated successfully. 
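The "Failed to watch ... connection refused" reflector errors repeat because the kubelet's informers keep re-issuing their initial List against https://10.0.0.115:6443 until the apiserver container started above begins answering; the neighboring lease-controller retries show the same pattern, with its logged interval doubling 200ms → 400ms → 800ms → 1.6s → 3.2s. A minimal client-go sketch of the same call the Node reflector makes — a hypothetical probe; the kubeconfig path is an assumption, not taken from this host:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; substitute whatever credentials point at 10.0.0.115:6443.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Mirrors the reflector's initial List:
        // GET /api/v1/nodes?fieldSelector=metadata.name=localhost&limit=500
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(),
            metav1.ListOptions{FieldSelector: "metadata.name=localhost", Limit: 500})
        if err != nil {
            fmt.Println("list nodes:", err) // "connection refused" while the apiserver is down
            return
        }
        fmt.Println("nodes returned:", len(nodes.Items))
    }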
Jan 17 00:42:14.999874 kubelet[2222]: E0117 00:42:14.999424 2222 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:42:15.009164 kubelet[2222]: E0117 00:42:15.008750 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:15.010418 kubelet[2222]: E0117 00:42:15.009708 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:15.010531 kubelet[2222]: E0117 00:42:15.010452 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:15.010728 kubelet[2222]: E0117 00:42:15.010660 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:15.015781 kubelet[2222]: E0117 00:42:15.015724 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:15.016987 kubelet[2222]: E0117 00:42:15.015967 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:16.031646 kubelet[2222]: E0117 00:42:16.030822 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:16.031646 kubelet[2222]: E0117 00:42:16.031487 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:16.031646 kubelet[2222]: E0117 00:42:16.032559 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:16.031646 kubelet[2222]: E0117 00:42:16.032693 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:16.922619 kubelet[2222]: I0117 00:42:16.919264 2222 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:17.756343 kubelet[2222]: E0117 00:42:17.755884 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:17.757907 kubelet[2222]: E0117 00:42:17.756651 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:20.226265 kubelet[2222]: E0117 00:42:20.225741 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:20.226265 kubelet[2222]: E0117 00:42:20.225991 2222 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:20.584142 kubelet[2222]: E0117 00:42:20.579834 2222 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:42:20.801902 kubelet[2222]: E0117 00:42:20.790880 2222 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:42:20.801902 kubelet[2222]: E0117 00:42:20.791128 2222 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:21.137509 kubelet[2222]: E0117 00:42:21.136923 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:42:21.305669 kubelet[2222]: E0117 00:42:21.300798 2222 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5de82490f9ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,LastTimestamp:2026-01-17 00:42:10.300328398 +0000 UTC m=+1.387852201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:42:21.349959 kubelet[2222]: I0117 00:42:21.349893 2222 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:42:21.476277 kubelet[2222]: I0117 00:42:21.476196 2222 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:21.515689 kubelet[2222]: E0117 00:42:21.474916 2222 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5de827df0570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:42:10.355774832 +0000 UTC m=+1.443298696,LastTimestamp:2026-01-17 00:42:10.355774832 +0000 UTC m=+1.443298696,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:42:21.607731 kubelet[2222]: E0117 00:42:21.607613 2222 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:21.608074 kubelet[2222]: I0117 00:42:21.607811 2222 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:21.610529 kubelet[2222]: E0117 00:42:21.610463 2222 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:21.610529 kubelet[2222]: I0117 00:42:21.610503 2222 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:21.614619 kubelet[2222]: E0117 00:42:21.613079 2222 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:22.002204 kubelet[2222]: I0117 00:42:22.001791 2222 apiserver.go:52] "Watching apiserver" Jan 17 00:42:22.029074 kubelet[2222]: I0117 00:42:22.028967 2222 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:42:24.987792 systemd[1]: Reloading requested from client PID 2513 ('systemctl') (unit session-7.scope)... Jan 17 00:42:24.987832 systemd[1]: Reloading... Jan 17 00:42:25.194197 zram_generator::config[2555]: No configuration found. Jan 17 00:42:25.478742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:42:25.731340 systemd[1]: Reloading finished in 741 ms. Jan 17 00:42:25.892503 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:25.917633 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:42:25.918053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:25.918502 systemd[1]: kubelet.service: Consumed 5.326s CPU time, 130.9M memory peak, 0B memory swap peak. Jan 17 00:42:25.934713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:42:26.247671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:42:26.256510 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:42:26.473360 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:42:26.473360 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:42:26.475681 kubelet[2596]: I0117 00:42:26.474421 2596 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:42:26.497266 kubelet[2596]: I0117 00:42:26.495263 2596 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:42:26.497266 kubelet[2596]: I0117 00:42:26.495300 2596 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:42:26.497266 kubelet[2596]: I0117 00:42:26.495343 2596 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:42:26.497266 kubelet[2596]: I0117 00:42:26.495357 2596 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
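The "no PriorityClass with name system-node-critical was found" rejections above are a bootstrap-ordering artifact rather than a misconfiguration: system-node-critical is one of the built-in PriorityClasses the kube-apiserver recreates at startup, so mirror-pod creation for the static control-plane pods can only fail in the window before that happens (by 00:42:27 below, the same pods fail only with "already exists"). Once the control plane answers, the built-in class can be checked with:

    kubectl get priorityclass system-node-critical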
Jan 17 00:42:26.497266 kubelet[2596]: I0117 00:42:26.495685 2596 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:42:26.498542 kubelet[2596]: I0117 00:42:26.497748 2596 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:42:26.501675 kubelet[2596]: I0117 00:42:26.501514 2596 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:42:26.510862 kubelet[2596]: E0117 00:42:26.510782 2596 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:42:26.515050 kubelet[2596]: I0117 00:42:26.511687 2596 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:42:26.529657 kubelet[2596]: I0117 00:42:26.529609 2596 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:42:26.531709 kubelet[2596]: I0117 00:42:26.531618 2596 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:42:26.531942 kubelet[2596]: I0117 00:42:26.531683 2596 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:42:26.531942 kubelet[2596]: I0117 00:42:26.531864 2596 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:42:26.531942 kubelet[2596]: I0117 00:42:26.531878 2596 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:42:26.532350 kubelet[2596]: I0117 00:42:26.531951 2596 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:42:26.533141 kubelet[2596]: I0117 00:42:26.532854 2596 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:26.535084 kubelet[2596]: I0117 00:42:26.534332 2596 kubelet.go:475] "Attempting to sync node 
with API server" Jan 17 00:42:26.535084 kubelet[2596]: I0117 00:42:26.534354 2596 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:42:26.535084 kubelet[2596]: I0117 00:42:26.534383 2596 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:42:26.535084 kubelet[2596]: I0117 00:42:26.534405 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:42:26.542133 kubelet[2596]: I0117 00:42:26.539364 2596 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:42:26.542133 kubelet[2596]: I0117 00:42:26.540036 2596 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:42:26.542133 kubelet[2596]: I0117 00:42:26.540072 2596 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:42:26.546836 kubelet[2596]: I0117 00:42:26.546814 2596 server.go:1262] "Started kubelet" Jan 17 00:42:26.549257 kubelet[2596]: I0117 00:42:26.549225 2596 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:42:26.550732 kubelet[2596]: I0117 00:42:26.550624 2596 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:42:26.551985 kubelet[2596]: I0117 00:42:26.551893 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:42:26.568161 kubelet[2596]: I0117 00:42:26.568058 2596 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:42:26.572538 kubelet[2596]: I0117 00:42:26.572493 2596 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:42:26.572719 kubelet[2596]: I0117 00:42:26.572700 2596 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:42:26.573076 kubelet[2596]: I0117 00:42:26.573018 2596 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:42:26.573485 kubelet[2596]: I0117 00:42:26.573466 2596 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:42:26.577340 kubelet[2596]: I0117 00:42:26.576887 2596 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:42:26.577340 kubelet[2596]: I0117 00:42:26.577211 2596 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:42:26.584480 kubelet[2596]: I0117 00:42:26.584013 2596 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:42:26.584480 kubelet[2596]: I0117 00:42:26.584207 2596 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:42:26.597236 kubelet[2596]: I0117 00:42:26.596524 2596 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:42:26.606141 kubelet[2596]: E0117 00:42:26.602627 2596 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:42:26.626841 kubelet[2596]: I0117 00:42:26.626757 2596 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:42:26.637054 kubelet[2596]: I0117 00:42:26.635855 2596 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:42:26.637054 kubelet[2596]: I0117 00:42:26.635885 2596 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:42:26.638183 kubelet[2596]: I0117 00:42:26.637840 2596 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:42:26.638353 kubelet[2596]: E0117 00:42:26.638329 2596 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:42:26.682665 kubelet[2596]: I0117 00:42:26.682633 2596 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:42:26.684828 kubelet[2596]: I0117 00:42:26.684741 2596 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:42:26.685039 kubelet[2596]: I0117 00:42:26.685026 2596 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:42:26.685312 kubelet[2596]: I0117 00:42:26.685294 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:42:26.685408 kubelet[2596]: I0117 00:42:26.685382 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:42:26.685598 kubelet[2596]: I0117 00:42:26.685581 2596 policy_none.go:49] "None policy: Start" Jan 17 00:42:26.685674 kubelet[2596]: I0117 00:42:26.685663 2596 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:42:26.685731 kubelet[2596]: I0117 00:42:26.685720 2596 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:42:26.685948 kubelet[2596]: I0117 00:42:26.685883 2596 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:42:26.686024 kubelet[2596]: I0117 00:42:26.686014 2596 policy_none.go:47] "Start" Jan 17 00:42:26.695667 kubelet[2596]: E0117 00:42:26.695607 2596 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:42:26.695884 kubelet[2596]: I0117 00:42:26.695838 2596 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:42:26.696014 kubelet[2596]: I0117 00:42:26.695873 2596 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:42:26.696627 kubelet[2596]: I0117 00:42:26.696570 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:42:26.702735 kubelet[2596]: E0117 00:42:26.700855 2596 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:42:26.740355 kubelet[2596]: I0117 00:42:26.740271 2596 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:26.743297 kubelet[2596]: I0117 00:42:26.741432 2596 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.743297 kubelet[2596]: I0117 00:42:26.743191 2596 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:26.783247 kubelet[2596]: I0117 00:42:26.783011 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:26.783247 kubelet[2596]: I0117 00:42:26.783077 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:26.783247 kubelet[2596]: I0117 00:42:26.783144 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.783247 kubelet[2596]: I0117 00:42:26.783164 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.783247 kubelet[2596]: I0117 00:42:26.783182 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.783571 kubelet[2596]: I0117 00:42:26.783196 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.783571 kubelet[2596]: I0117 00:42:26.783212 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:42:26.783571 kubelet[2596]: I0117 00:42:26.783229 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/93882d24236584d6ce29851ba45d4e24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"93882d24236584d6ce29851ba45d4e24\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:42:26.783571 kubelet[2596]: I0117 00:42:26.783247 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:26.808038 kubelet[2596]: I0117 00:42:26.807619 2596 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:42:26.830988 kubelet[2596]: I0117 00:42:26.830948 2596 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:42:26.833150 kubelet[2596]: I0117 00:42:26.832236 2596 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:42:27.073518 kubelet[2596]: E0117 00:42:27.071606 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.089941 kubelet[2596]: E0117 00:42:27.089765 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.092363 kubelet[2596]: E0117 00:42:27.090811 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.541950 kubelet[2596]: I0117 00:42:27.538477 2596 apiserver.go:52] "Watching apiserver" Jan 17 00:42:27.589459 kubelet[2596]: I0117 00:42:27.588195 2596 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:42:27.670542 kubelet[2596]: E0117 00:42:27.670477 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.674142 kubelet[2596]: I0117 00:42:27.671374 2596 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:27.674142 kubelet[2596]: E0117 00:42:27.672448 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.717185 kubelet[2596]: E0117 00:42:27.717078 2596 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:42:27.717658 kubelet[2596]: E0117 00:42:27.717563 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.889470 kubelet[2596]: I0117 00:42:27.882660 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.854962498 podStartE2EDuration="1.854962498s" podCreationTimestamp="2026-01-17 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:27.745662636 +0000 UTC m=+1.461155012" 
watchObservedRunningTime="2026-01-17 00:42:27.854962498 +0000 UTC m=+1.570454864" Jan 17 00:42:27.972058 kubelet[2596]: I0117 00:42:27.971250 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.97104298 podStartE2EDuration="1.97104298s" podCreationTimestamp="2026-01-17 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:27.888355933 +0000 UTC m=+1.603848290" watchObservedRunningTime="2026-01-17 00:42:27.97104298 +0000 UTC m=+1.686535336" Jan 17 00:42:28.014525 kubelet[2596]: I0117 00:42:28.013038 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.01301594 podStartE2EDuration="2.01301594s" podCreationTimestamp="2026-01-17 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:27.975650884 +0000 UTC m=+1.691143269" watchObservedRunningTime="2026-01-17 00:42:28.01301594 +0000 UTC m=+1.728508306" Jan 17 00:42:28.677131 kubelet[2596]: E0117 00:42:28.676976 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:28.679193 kubelet[2596]: E0117 00:42:28.679148 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:28.679975 kubelet[2596]: E0117 00:42:28.679932 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:29.693510 kubelet[2596]: E0117 00:42:29.693239 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:29.694977 kubelet[2596]: E0117 00:42:29.694858 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:30.705511 kubelet[2596]: E0117 00:42:30.702004 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:30.822983 kubelet[2596]: I0117 00:42:30.817298 2596 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:42:30.822983 kubelet[2596]: I0117 00:42:30.821014 2596 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:42:30.823314 containerd[1476]: time="2026-01-17T00:42:30.820607712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:42:31.778551 systemd[1]: Created slice kubepods-besteffort-pod34f3deb8_6161_4886_83e6_4cea2ad199dd.slice - libcontainer container kubepods-besteffort-pod34f3deb8_6161_4886_83e6_4cea2ad199dd.slice. 
Jan 17 00:42:31.814333 kubelet[2596]: I0117 00:42:31.814260 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2k8m\" (UniqueName: \"kubernetes.io/projected/34f3deb8-6161-4886-83e6-4cea2ad199dd-kube-api-access-v2k8m\") pod \"kube-proxy-clczj\" (UID: \"34f3deb8-6161-4886-83e6-4cea2ad199dd\") " pod="kube-system/kube-proxy-clczj" Jan 17 00:42:31.814333 kubelet[2596]: I0117 00:42:31.814340 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34f3deb8-6161-4886-83e6-4cea2ad199dd-xtables-lock\") pod \"kube-proxy-clczj\" (UID: \"34f3deb8-6161-4886-83e6-4cea2ad199dd\") " pod="kube-system/kube-proxy-clczj" Jan 17 00:42:31.814333 kubelet[2596]: I0117 00:42:31.814381 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34f3deb8-6161-4886-83e6-4cea2ad199dd-kube-proxy\") pod \"kube-proxy-clczj\" (UID: \"34f3deb8-6161-4886-83e6-4cea2ad199dd\") " pod="kube-system/kube-proxy-clczj" Jan 17 00:42:31.815150 kubelet[2596]: I0117 00:42:31.814408 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34f3deb8-6161-4886-83e6-4cea2ad199dd-lib-modules\") pod \"kube-proxy-clczj\" (UID: \"34f3deb8-6161-4886-83e6-4cea2ad199dd\") " pod="kube-system/kube-proxy-clczj" Jan 17 00:42:31.887926 systemd[1]: Created slice kubepods-besteffort-pod4b21805a_374f_4ab8_b07e_e98744d30303.slice - libcontainer container kubepods-besteffort-pod4b21805a_374f_4ab8_b07e_e98744d30303.slice. Jan 17 00:42:32.021187 kubelet[2596]: I0117 00:42:32.020892 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b21805a-374f-4ab8-b07e-e98744d30303-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-sjl6v\" (UID: \"4b21805a-374f-4ab8-b07e-e98744d30303\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-sjl6v" Jan 17 00:42:32.021187 kubelet[2596]: I0117 00:42:32.020978 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sqg\" (UniqueName: \"kubernetes.io/projected/4b21805a-374f-4ab8-b07e-e98744d30303-kube-api-access-74sqg\") pod \"tigera-operator-65cdcdfd6d-sjl6v\" (UID: \"4b21805a-374f-4ab8-b07e-e98744d30303\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-sjl6v" Jan 17 00:42:32.152944 kubelet[2596]: E0117 00:42:32.148756 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:32.159009 containerd[1476]: time="2026-01-17T00:42:32.158251880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clczj,Uid:34f3deb8-6161-4886-83e6-4cea2ad199dd,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:32.213224 containerd[1476]: time="2026-01-17T00:42:32.211861917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-sjl6v,Uid:4b21805a-374f-4ab8-b07e-e98744d30303,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:42:32.343172 containerd[1476]: time="2026-01-17T00:42:32.340694628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:32.343172 containerd[1476]: time="2026-01-17T00:42:32.341211133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:32.347143 containerd[1476]: time="2026-01-17T00:42:32.341419406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:32.347143 containerd[1476]: time="2026-01-17T00:42:32.345026721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:32.394748 containerd[1476]: time="2026-01-17T00:42:32.394288175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:32.394748 containerd[1476]: time="2026-01-17T00:42:32.394408971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:32.395584 containerd[1476]: time="2026-01-17T00:42:32.395157479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:32.395584 containerd[1476]: time="2026-01-17T00:42:32.395285249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:32.401397 systemd[1]: Started cri-containerd-2e928e0722c8c4b491cdb34afb06efa023a4691b3ecf6b9076c0e423d621cb67.scope - libcontainer container 2e928e0722c8c4b491cdb34afb06efa023a4691b3ecf6b9076c0e423d621cb67. Jan 17 00:42:32.480325 systemd[1]: Started cri-containerd-4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187.scope - libcontainer container 4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187. 
Jan 17 00:42:32.516438 containerd[1476]: time="2026-01-17T00:42:32.515705185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clczj,Uid:34f3deb8-6161-4886-83e6-4cea2ad199dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e928e0722c8c4b491cdb34afb06efa023a4691b3ecf6b9076c0e423d621cb67\"" Jan 17 00:42:32.533756 kubelet[2596]: E0117 00:42:32.533683 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:32.616296 containerd[1476]: time="2026-01-17T00:42:32.615561406Z" level=info msg="CreateContainer within sandbox \"2e928e0722c8c4b491cdb34afb06efa023a4691b3ecf6b9076c0e423d621cb67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:42:32.669964 containerd[1476]: time="2026-01-17T00:42:32.669817474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-sjl6v,Uid:4b21805a-374f-4ab8-b07e-e98744d30303,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187\"" Jan 17 00:42:32.673644 containerd[1476]: time="2026-01-17T00:42:32.672820266Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:42:32.692152 containerd[1476]: time="2026-01-17T00:42:32.691998778Z" level=info msg="CreateContainer within sandbox \"2e928e0722c8c4b491cdb34afb06efa023a4691b3ecf6b9076c0e423d621cb67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"337ae4ff35fc0e36a05c01ae3fc96fdd857094498ff6a897b6d3c89556635bdf\"" Jan 17 00:42:32.694561 containerd[1476]: time="2026-01-17T00:42:32.692939637Z" level=info msg="StartContainer for \"337ae4ff35fc0e36a05c01ae3fc96fdd857094498ff6a897b6d3c89556635bdf\"" Jan 17 00:42:32.781513 systemd[1]: Started cri-containerd-337ae4ff35fc0e36a05c01ae3fc96fdd857094498ff6a897b6d3c89556635bdf.scope - libcontainer container 337ae4ff35fc0e36a05c01ae3fc96fdd857094498ff6a897b6d3c89556635bdf. 
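[Editor's note] The containerd messages above trace the CRI call sequence the kubelet runs for every pod: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox and returns a container id, and StartContainer launches it. A compressed sketch of the same sequence against the CRI v1 API, with requests trimmed to the minimum; real kubelet requests carry full pod/container metadata, and the image reference below is a placeholder, not taken from the log:

    package crisketch

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // runPod replays the sandbox -> container -> start sequence logged above.
    // rt would be dialed exactly as in the UpdateRuntimeConfig sketch earlier.
    func runPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-proxy-clczj",
    				Namespace: "kube-system",
    				Uid:       "34f3deb8-6161-4886-83e6-4cea2ad199dd",
    			},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	// Real requests also include the SandboxConfig; omitted for brevity.
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:<tag>"},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: ctr.ContainerId,
    	})
    	return err
    }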
Jan 17 00:42:32.907714 containerd[1476]: time="2026-01-17T00:42:32.901285240Z" level=info msg="StartContainer for \"337ae4ff35fc0e36a05c01ae3fc96fdd857094498ff6a897b6d3c89556635bdf\" returns successfully" Jan 17 00:42:33.746770 kubelet[2596]: E0117 00:42:33.746456 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:34.123832 kubelet[2596]: E0117 00:42:34.115908 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:34.219246 kubelet[2596]: I0117 00:42:34.218801 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clczj" podStartSLOduration=3.218776213 podStartE2EDuration="3.218776213s" podCreationTimestamp="2026-01-17 00:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:33.793764736 +0000 UTC m=+7.509257123" watchObservedRunningTime="2026-01-17 00:42:34.218776213 +0000 UTC m=+7.934268570" Jan 17 00:42:34.752075 kubelet[2596]: E0117 00:42:34.751394 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:34.757532 kubelet[2596]: E0117 00:42:34.752247 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:34.848573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402978268.mount: Deactivated successfully. 
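[Editor's note] The recurring "Nameserver limits exceeded" errors are the kubelet clamping the host's resolv.conf to the classic glibc limit of three nameservers before handing DNS config to pods; the "applied nameserver line" in each entry shows which three survived. A rough reproduction of that clamping logic, assuming a standard resolv.conf format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const maxNameservers = 3 // glibc MAXNS; the kubelet enforces the same cap

    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		// Matches the kubelet's warning: extra entries are silently dropped.
    		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
    	}
    }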
Jan 17 00:42:38.940911 containerd[1476]: time="2026-01-17T00:42:38.936629846Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:38.980578 containerd[1476]: time="2026-01-17T00:42:38.975409214Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:42:39.004855 containerd[1476]: time="2026-01-17T00:42:38.983063521Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:39.055527 containerd[1476]: time="2026-01-17T00:42:39.054970839Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:39.055527 containerd[1476]: time="2026-01-17T00:42:39.055328284Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.382476181s" Jan 17 00:42:39.055527 containerd[1476]: time="2026-01-17T00:42:39.055378695Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:42:39.375131 containerd[1476]: time="2026-01-17T00:42:39.351851856Z" level=info msg="CreateContainer within sandbox \"4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:42:43.802610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130136407.mount: Deactivated successfully. Jan 17 00:42:43.826820 containerd[1476]: time="2026-01-17T00:42:43.826655344Z" level=info msg="CreateContainer within sandbox \"4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb\"" Jan 17 00:42:43.834401 containerd[1476]: time="2026-01-17T00:42:43.834303170Z" level=info msg="StartContainer for \"8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb\"" Jan 17 00:42:44.028868 systemd[1]: Started cri-containerd-8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb.scope - libcontainer container 8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb. Jan 17 00:42:44.130786 containerd[1476]: time="2026-01-17T00:42:44.128435179Z" level=info msg="StartContainer for \"8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb\" returns successfully" Jan 17 00:42:53.718887 sudo[1644]: pam_unix(sudo:session): session closed for user root Jan 17 00:42:53.733025 sshd[1641]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:53.747578 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:57004.service: Deactivated successfully. Jan 17 00:42:53.759937 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:42:53.760666 systemd[1]: session-7.scope: Consumed 16.697s CPU time, 164.2M memory peak, 0B memory swap peak. Jan 17 00:42:53.763082 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. 
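[Editor's note] The pull above records a repo tag, a repo digest, and two sizes (25061691 bytes read off the wire vs. 25057686 stored). Fetching the same operator image with containerd's Go client would look roughly like this; the "k8s.io" namespace is where the CRI plugin keeps Kubernetes-managed images, and the client options are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatalf("pull: %v", err)
    	}
    	size, err := img.Size(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s (%s), %d bytes\n", img.Name(), img.Target().Digest, size)
    }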
Jan 17 00:42:53.768981 systemd-logind[1459]: Removed session 7. Jan 17 00:43:05.273139 kubelet[2596]: I0117 00:43:05.271671 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-sjl6v" podStartSLOduration=27.818423007 podStartE2EDuration="34.271650622s" podCreationTimestamp="2026-01-17 00:42:31 +0000 UTC" firstStartedPulling="2026-01-17 00:42:32.671841058 +0000 UTC m=+6.387333414" lastFinishedPulling="2026-01-17 00:42:39.125068653 +0000 UTC m=+12.840561029" observedRunningTime="2026-01-17 00:42:44.968326734 +0000 UTC m=+18.683819140" watchObservedRunningTime="2026-01-17 00:43:05.271650622 +0000 UTC m=+38.987142997" Jan 17 00:43:05.298262 systemd[1]: Created slice kubepods-besteffort-pode5d17ae5_2c00_47af_bd3d_e13401fb9f78.slice - libcontainer container kubepods-besteffort-pode5d17ae5_2c00_47af_bd3d_e13401fb9f78.slice. Jan 17 00:43:05.304169 kubelet[2596]: I0117 00:43:05.299422 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5d17ae5-2c00-47af-bd3d-e13401fb9f78-tigera-ca-bundle\") pod \"calico-typha-78c7bc8f4-v8rzj\" (UID: \"e5d17ae5-2c00-47af-bd3d-e13401fb9f78\") " pod="calico-system/calico-typha-78c7bc8f4-v8rzj" Jan 17 00:43:05.304169 kubelet[2596]: I0117 00:43:05.299483 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5d17ae5-2c00-47af-bd3d-e13401fb9f78-typha-certs\") pod \"calico-typha-78c7bc8f4-v8rzj\" (UID: \"e5d17ae5-2c00-47af-bd3d-e13401fb9f78\") " pod="calico-system/calico-typha-78c7bc8f4-v8rzj" Jan 17 00:43:05.304169 kubelet[2596]: I0117 00:43:05.299514 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhw7h\" (UniqueName: \"kubernetes.io/projected/e5d17ae5-2c00-47af-bd3d-e13401fb9f78-kube-api-access-fhw7h\") pod \"calico-typha-78c7bc8f4-v8rzj\" (UID: \"e5d17ae5-2c00-47af-bd3d-e13401fb9f78\") " pod="calico-system/calico-typha-78c7bc8f4-v8rzj" Jan 17 00:43:05.638880 kubelet[2596]: E0117 00:43:05.638629 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:05.644867 containerd[1476]: time="2026-01-17T00:43:05.639900935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c7bc8f4-v8rzj,Uid:e5d17ae5-2c00-47af-bd3d-e13401fb9f78,Namespace:calico-system,Attempt:0,}" Jan 17 00:43:05.668901 systemd[1]: Created slice kubepods-besteffort-pod3ed1b03d_96a0_40ab_a4ce_f6d85f63d53e.slice - libcontainer container kubepods-besteffort-pod3ed1b03d_96a0_40ab_a4ce_f6d85f63d53e.slice. 
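[Editor's note] The tigera-operator entry just below makes the SLO bookkeeping visible: podStartSLOduration is the end-to-end startup time minus the image pull window, which is why it is shorter than podStartE2EDuration here and identical to it for the static pods earlier (their pull timestamps are the zero value). Checking the arithmetic from the logged monotonic offsets (m=+...):

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets and durations from the log entry, in seconds.
    	firstStartedPulling := 6.387333414  // m=+6.387333414
    	lastFinishedPulling := 12.840561029 // m=+12.840561029
    	podStartE2E := 34.271650622         // podStartE2EDuration

    	pullWindow := lastFinishedPulling - firstStartedPulling
    	slo := podStartE2E - pullWindow

    	fmt.Printf("pull window:         %.9fs\n", pullWindow) // 6.453227615s
    	fmt.Printf("podStartSLOduration: %.9fs\n", slo)        // 27.818423007s, matching the log
    }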
Jan 17 00:43:05.706608 kubelet[2596]: I0117 00:43:05.706463 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ssr\" (UniqueName: \"kubernetes.io/projected/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-kube-api-access-v2ssr\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.706608 kubelet[2596]: I0117 00:43:05.706560 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-node-certs\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.706608 kubelet[2596]: I0117 00:43:05.706596 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-var-lib-calico\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710008 kubelet[2596]: I0117 00:43:05.706625 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-tigera-ca-bundle\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710008 kubelet[2596]: I0117 00:43:05.706650 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-cni-net-dir\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710008 kubelet[2596]: I0117 00:43:05.706670 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-policysync\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710008 kubelet[2596]: I0117 00:43:05.706688 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-lib-modules\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710008 kubelet[2596]: I0117 00:43:05.706709 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-var-run-calico\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710306 kubelet[2596]: I0117 00:43:05.706735 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-flexvol-driver-host\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710306 kubelet[2596]: I0117 00:43:05.706756 2596 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-cni-bin-dir\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710306 kubelet[2596]: I0117 00:43:05.706773 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-cni-log-dir\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.710306 kubelet[2596]: I0117 00:43:05.706794 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e-xtables-lock\") pod \"calico-node-72g2h\" (UID: \"3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e\") " pod="calico-system/calico-node-72g2h" Jan 17 00:43:05.819199 kubelet[2596]: E0117 00:43:05.816475 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.819199 kubelet[2596]: W0117 00:43:05.816521 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.819199 kubelet[2596]: E0117 00:43:05.816549 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.819199 kubelet[2596]: E0117 00:43:05.817462 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:05.819838 kubelet[2596]: E0117 00:43:05.819811 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.819838 kubelet[2596]: W0117 00:43:05.819828 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.819949 kubelet[2596]: E0117 00:43:05.819843 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.822318 kubelet[2596]: E0117 00:43:05.821816 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.822318 kubelet[2596]: W0117 00:43:05.821831 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.822318 kubelet[2596]: E0117 00:43:05.821845 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:05.825345 kubelet[2596]: E0117 00:43:05.825250 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.825345 kubelet[2596]: W0117 00:43:05.825277 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.825345 kubelet[2596]: E0117 00:43:05.825293 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.825978 kubelet[2596]: E0117 00:43:05.825958 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.825978 kubelet[2596]: W0117 00:43:05.825972 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.826248 kubelet[2596]: E0117 00:43:05.825985 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.827331 kubelet[2596]: E0117 00:43:05.827193 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.827331 kubelet[2596]: W0117 00:43:05.827221 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.827331 kubelet[2596]: E0117 00:43:05.827234 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.835179 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.840226 kubelet[2596]: W0117 00:43:05.835199 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.835220 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.837449 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.840226 kubelet[2596]: W0117 00:43:05.837463 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.837480 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.838761 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.840226 kubelet[2596]: W0117 00:43:05.838774 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.840226 kubelet[2596]: E0117 00:43:05.838789 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.841475 kubelet[2596]: E0117 00:43:05.841432 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.841475 kubelet[2596]: W0117 00:43:05.841465 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.841583 kubelet[2596]: E0117 00:43:05.841484 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.852470 kubelet[2596]: E0117 00:43:05.850238 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.852470 kubelet[2596]: W0117 00:43:05.850278 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.852470 kubelet[2596]: E0117 00:43:05.850304 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.852689 kubelet[2596]: E0117 00:43:05.852500 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.852689 kubelet[2596]: W0117 00:43:05.852517 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.852689 kubelet[2596]: E0117 00:43:05.852537 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.853759 containerd[1476]: time="2026-01-17T00:43:05.851422003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
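[Editor's note] Each error triplet here is one probe of Calico's nodeagent~uds FlexVolume driver: the kubelet execs <plugin-dir>/nodeagent~uds/uds init, the binary does not exist yet, the empty stdout fails JSON decoding ("unexpected end of JSON input"), and the plugin directory is skipped until calico-node installs the driver. For reference, a minimal stand-in driver that would satisfy the init probe under the FlexVolume convention; the capability set is an assumption, and Calico's real uds binary does considerably more:

    package main

    import (
    	"fmt"
    	"os"
    )

    // FlexVolume drivers answer each subcommand with a JSON status object on stdout.
    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// An empty stdout here is exactly what produces the kubelet's
    		// "unexpected end of JSON input" errors in the log.
    		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
    		return
    	}
    	// Anything not implemented is reported as unsupported.
    	fmt.Println(`{"status":"Not supported"}`)
    	os.Exit(1)
    }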
Jan 17 00:43:05.876387 containerd[1476]: time="2026-01-17T00:43:05.859621146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:05.876387 containerd[1476]: time="2026-01-17T00:43:05.859664657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:05.876387 containerd[1476]: time="2026-01-17T00:43:05.860265726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:43:05.935760 kubelet[2596]: E0117 00:43:05.935707 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.935760 kubelet[2596]: W0117 00:43:05.935720 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.935760 kubelet[2596]: E0117 00:43:05.935732 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 17 00:43:05.936305 kubelet[2596]: E0117 00:43:05.936193 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.936305 kubelet[2596]: W0117 00:43:05.936205 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.936305 kubelet[2596]: E0117 00:43:05.936218 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.936850 kubelet[2596]: E0117 00:43:05.936738 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.936850 kubelet[2596]: W0117 00:43:05.936754 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.936850 kubelet[2596]: E0117 00:43:05.936766 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.937504 kubelet[2596]: E0117 00:43:05.937401 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.937504 kubelet[2596]: W0117 00:43:05.937416 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.937504 kubelet[2596]: E0117 00:43:05.937429 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.937806 kubelet[2596]: E0117 00:43:05.937758 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.937806 kubelet[2596]: W0117 00:43:05.937772 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.937806 kubelet[2596]: E0117 00:43:05.937782 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.941434 kubelet[2596]: E0117 00:43:05.941253 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.941434 kubelet[2596]: W0117 00:43:05.941284 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.941434 kubelet[2596]: E0117 00:43:05.941302 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:05.943919 kubelet[2596]: E0117 00:43:05.942954 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.943919 kubelet[2596]: W0117 00:43:05.942967 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.943919 kubelet[2596]: E0117 00:43:05.942980 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.945654 kubelet[2596]: E0117 00:43:05.944300 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.945654 kubelet[2596]: W0117 00:43:05.944312 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.945654 kubelet[2596]: E0117 00:43:05.944384 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.946425 kubelet[2596]: E0117 00:43:05.945746 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.946425 kubelet[2596]: W0117 00:43:05.945758 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.946425 kubelet[2596]: E0117 00:43:05.945967 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.947706 kubelet[2596]: E0117 00:43:05.946593 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.947706 kubelet[2596]: W0117 00:43:05.946607 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.947706 kubelet[2596]: E0117 00:43:05.946619 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.948869 kubelet[2596]: E0117 00:43:05.948827 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.948869 kubelet[2596]: W0117 00:43:05.948856 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.948869 kubelet[2596]: E0117 00:43:05.948869 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:05.949720 kubelet[2596]: E0117 00:43:05.949667 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.949720 kubelet[2596]: W0117 00:43:05.949695 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.949720 kubelet[2596]: E0117 00:43:05.949708 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.950451 kubelet[2596]: E0117 00:43:05.950355 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.950451 kubelet[2596]: W0117 00:43:05.950385 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.950451 kubelet[2596]: E0117 00:43:05.950397 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.953471 kubelet[2596]: E0117 00:43:05.951447 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.953471 kubelet[2596]: W0117 00:43:05.951457 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.953471 kubelet[2596]: E0117 00:43:05.951469 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.958255 kubelet[2596]: E0117 00:43:05.956609 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.958255 kubelet[2596]: W0117 00:43:05.956625 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.958255 kubelet[2596]: E0117 00:43:05.956642 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.959362 kubelet[2596]: E0117 00:43:05.959318 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.959362 kubelet[2596]: W0117 00:43:05.959348 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.959450 kubelet[2596]: E0117 00:43:05.959366 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:05.960893 kubelet[2596]: E0117 00:43:05.960787 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.960893 kubelet[2596]: W0117 00:43:05.960825 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.960893 kubelet[2596]: E0117 00:43:05.960848 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.963801 kubelet[2596]: E0117 00:43:05.961321 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.963801 kubelet[2596]: W0117 00:43:05.961335 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.963801 kubelet[2596]: E0117 00:43:05.961350 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:05.962445 systemd[1]: Started cri-containerd-4beaa4970f048766cde17428556511ae68bd0c0426a996948cdefb0cbcc9ec69.scope - libcontainer container 4beaa4970f048766cde17428556511ae68bd0c0426a996948cdefb0cbcc9ec69. Jan 17 00:43:05.964078 kubelet[2596]: E0117 00:43:05.963827 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:05.964078 kubelet[2596]: W0117 00:43:05.963840 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:05.964078 kubelet[2596]: E0117 00:43:05.963854 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.005154 kubelet[2596]: E0117 00:43:06.003495 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.005154 kubelet[2596]: W0117 00:43:06.003557 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.005154 kubelet[2596]: E0117 00:43:06.003584 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.005154 kubelet[2596]: I0117 00:43:06.003652 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa61c0c6-a39e-4c93-94a9-44f82847e39a-socket-dir\") pod \"csi-node-driver-jdngt\" (UID: \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\") " pod="calico-system/csi-node-driver-jdngt" Jan 17 00:43:06.005154 kubelet[2596]: E0117 00:43:06.004505 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.005154 kubelet[2596]: W0117 00:43:06.004522 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.005154 kubelet[2596]: E0117 00:43:06.004606 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.005154 kubelet[2596]: I0117 00:43:06.004711 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fa61c0c6-a39e-4c93-94a9-44f82847e39a-varrun\") pod \"csi-node-driver-jdngt\" (UID: \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\") " pod="calico-system/csi-node-driver-jdngt" Jan 17 00:43:06.010130 kubelet[2596]: E0117 00:43:06.007180 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.010130 kubelet[2596]: W0117 00:43:06.007203 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.010130 kubelet[2596]: E0117 00:43:06.007221 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.014014 kubelet[2596]: E0117 00:43:06.011671 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.014014 kubelet[2596]: W0117 00:43:06.011697 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.014014 kubelet[2596]: E0117 00:43:06.011721 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.014014 kubelet[2596]: E0117 00:43:06.012239 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.014014 kubelet[2596]: W0117 00:43:06.012299 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.014014 kubelet[2596]: E0117 00:43:06.012313 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.014014 kubelet[2596]: I0117 00:43:06.012401 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa61c0c6-a39e-4c93-94a9-44f82847e39a-kubelet-dir\") pod \"csi-node-driver-jdngt\" (UID: \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\") " pod="calico-system/csi-node-driver-jdngt" Jan 17 00:43:06.020177 kubelet[2596]: E0117 00:43:06.017804 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.020177 kubelet[2596]: W0117 00:43:06.017901 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.020177 kubelet[2596]: E0117 00:43:06.017924 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.020177 kubelet[2596]: E0117 00:43:06.019225 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.020177 kubelet[2596]: W0117 00:43:06.019303 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.020177 kubelet[2596]: E0117 00:43:06.019323 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.020177 kubelet[2596]: E0117 00:43:06.020022 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.021382 kubelet[2596]: W0117 00:43:06.020034 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.021382 kubelet[2596]: E0117 00:43:06.020808 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.025833 kubelet[2596]: E0117 00:43:06.025226 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.025833 kubelet[2596]: W0117 00:43:06.025308 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.025833 kubelet[2596]: E0117 00:43:06.025329 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.028140 kubelet[2596]: E0117 00:43:06.027980 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.028140 kubelet[2596]: W0117 00:43:06.027997 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.028496 kubelet[2596]: E0117 00:43:06.028081 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.028496 kubelet[2596]: I0117 00:43:06.028407 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa61c0c6-a39e-4c93-94a9-44f82847e39a-registration-dir\") pod \"csi-node-driver-jdngt\" (UID: \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\") " pod="calico-system/csi-node-driver-jdngt" Jan 17 00:43:06.031262 kubelet[2596]: E0117 00:43:06.030636 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.031262 kubelet[2596]: W0117 00:43:06.030704 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.031262 kubelet[2596]: E0117 00:43:06.030722 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.031262 kubelet[2596]: I0117 00:43:06.030786 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txjf\" (UniqueName: \"kubernetes.io/projected/fa61c0c6-a39e-4c93-94a9-44f82847e39a-kube-api-access-7txjf\") pod \"csi-node-driver-jdngt\" (UID: \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\") " pod="calico-system/csi-node-driver-jdngt" Jan 17 00:43:06.032867 kubelet[2596]: E0117 00:43:06.032573 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.032867 kubelet[2596]: W0117 00:43:06.032589 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.032867 kubelet[2596]: E0117 00:43:06.032608 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.036378 kubelet[2596]: E0117 00:43:06.033201 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.036378 kubelet[2596]: W0117 00:43:06.033215 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.036378 kubelet[2596]: E0117 00:43:06.033230 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.036999 kubelet[2596]: E0117 00:43:06.036877 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.036999 kubelet[2596]: W0117 00:43:06.036891 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.036999 kubelet[2596]: E0117 00:43:06.036906 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.042084 kubelet[2596]: E0117 00:43:06.037806 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.042084 kubelet[2596]: W0117 00:43:06.039166 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.042084 kubelet[2596]: E0117 00:43:06.039190 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.043568 kubelet[2596]: E0117 00:43:06.043507 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:06.044834 containerd[1476]: time="2026-01-17T00:43:06.044792657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72g2h,Uid:3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e,Namespace:calico-system,Attempt:0,}" Jan 17 00:43:06.133780 containerd[1476]: time="2026-01-17T00:43:06.130971777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c7bc8f4-v8rzj,Uid:e5d17ae5-2c00-47af-bd3d-e13401fb9f78,Namespace:calico-system,Attempt:0,} returns sandbox id \"4beaa4970f048766cde17428556511ae68bd0c0426a996948cdefb0cbcc9ec69\"" Jan 17 00:43:06.135452 kubelet[2596]: E0117 00:43:06.134756 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:06.136491 containerd[1476]: time="2026-01-17T00:43:06.136460824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:43:06.140496 kubelet[2596]: E0117 00:43:06.138760 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.140496 kubelet[2596]: W0117 00:43:06.138784 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.140496 kubelet[2596]: E0117 00:43:06.138879 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.142787 kubelet[2596]: E0117 00:43:06.142713 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.142787 kubelet[2596]: W0117 00:43:06.142772 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.142939 kubelet[2596]: E0117 00:43:06.142806 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.143712 kubelet[2596]: E0117 00:43:06.143504 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.143712 kubelet[2596]: W0117 00:43:06.143554 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.143712 kubelet[2596]: E0117 00:43:06.143585 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.146592 kubelet[2596]: E0117 00:43:06.146427 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.146592 kubelet[2596]: W0117 00:43:06.146452 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.146592 kubelet[2596]: E0117 00:43:06.146515 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.166186 kubelet[2596]: E0117 00:43:06.165664 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.166186 kubelet[2596]: W0117 00:43:06.165691 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.166186 kubelet[2596]: E0117 00:43:06.165719 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.208314 containerd[1476]: time="2026-01-17T00:43:06.195464988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:06.210346 containerd[1476]: time="2026-01-17T00:43:06.208941236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:06.210346 containerd[1476]: time="2026-01-17T00:43:06.208975720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:06.211283 containerd[1476]: time="2026-01-17T00:43:06.211211168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.177893 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.218855 kubelet[2596]: W0117 00:43:06.217073 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.217172 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.217839 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.218855 kubelet[2596]: W0117 00:43:06.217856 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.217873 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.218272 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.218855 kubelet[2596]: W0117 00:43:06.218287 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.218312 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.218855 kubelet[2596]: E0117 00:43:06.218625 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.219399 kubelet[2596]: W0117 00:43:06.218638 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.219399 kubelet[2596]: E0117 00:43:06.218663 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.219399 kubelet[2596]: E0117 00:43:06.219198 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.219399 kubelet[2596]: W0117 00:43:06.219215 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.219399 kubelet[2596]: E0117 00:43:06.219230 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.219693 kubelet[2596]: E0117 00:43:06.219622 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.219693 kubelet[2596]: W0117 00:43:06.219652 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.219693 kubelet[2596]: E0117 00:43:06.219666 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.232357 kubelet[2596]: E0117 00:43:06.231523 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.232357 kubelet[2596]: W0117 00:43:06.231597 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.232357 kubelet[2596]: E0117 00:43:06.231633 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.245185 kubelet[2596]: E0117 00:43:06.238770 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.245185 kubelet[2596]: W0117 00:43:06.238795 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.245185 kubelet[2596]: E0117 00:43:06.238824 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.245185 kubelet[2596]: E0117 00:43:06.243638 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.245185 kubelet[2596]: W0117 00:43:06.243657 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.245185 kubelet[2596]: E0117 00:43:06.243682 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.267684 kubelet[2596]: E0117 00:43:06.267579 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.267684 kubelet[2596]: W0117 00:43:06.267608 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.267684 kubelet[2596]: E0117 00:43:06.267638 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.273330 kubelet[2596]: E0117 00:43:06.269330 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.273330 kubelet[2596]: W0117 00:43:06.269357 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.273330 kubelet[2596]: E0117 00:43:06.269376 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.273330 kubelet[2596]: E0117 00:43:06.271804 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.273330 kubelet[2596]: W0117 00:43:06.271818 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.273330 kubelet[2596]: E0117 00:43:06.271994 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.276752 kubelet[2596]: E0117 00:43:06.273626 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.276752 kubelet[2596]: W0117 00:43:06.273639 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.276752 kubelet[2596]: E0117 00:43:06.273654 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.276752 kubelet[2596]: E0117 00:43:06.276477 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.276752 kubelet[2596]: W0117 00:43:06.276490 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.276752 kubelet[2596]: E0117 00:43:06.276505 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.278886 kubelet[2596]: E0117 00:43:06.278309 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.278886 kubelet[2596]: W0117 00:43:06.278324 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.278886 kubelet[2596]: E0117 00:43:06.278338 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.281235 kubelet[2596]: E0117 00:43:06.279776 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.281235 kubelet[2596]: W0117 00:43:06.279805 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.281235 kubelet[2596]: E0117 00:43:06.279820 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.281235 kubelet[2596]: E0117 00:43:06.280366 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.281235 kubelet[2596]: W0117 00:43:06.280378 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.281235 kubelet[2596]: E0117 00:43:06.280391 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.285922 kubelet[2596]: E0117 00:43:06.285890 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.285922 kubelet[2596]: W0117 00:43:06.285909 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.286179 kubelet[2596]: E0117 00:43:06.285924 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.286633 kubelet[2596]: E0117 00:43:06.286461 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.286633 kubelet[2596]: W0117 00:43:06.286477 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.286633 kubelet[2596]: E0117 00:43:06.286490 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.286954 kubelet[2596]: E0117 00:43:06.286939 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.288589 kubelet[2596]: W0117 00:43:06.287018 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.291947 kubelet[2596]: E0117 00:43:06.288765 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:06.291747 systemd[1]: Started cri-containerd-566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f.scope - libcontainer container 566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f. Jan 17 00:43:06.407509 kubelet[2596]: E0117 00:43:06.406860 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:06.407509 kubelet[2596]: W0117 00:43:06.406889 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:06.407509 kubelet[2596]: E0117 00:43:06.406968 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:06.433921 containerd[1476]: time="2026-01-17T00:43:06.433573649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72g2h,Uid:3ed1b03d-96a0-40ab-a4ce-f6d85f63d53e,Namespace:calico-system,Attempt:0,} returns sandbox id \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\"" Jan 17 00:43:06.443120 kubelet[2596]: E0117 00:43:06.440970 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:07.640246 kubelet[2596]: E0117 00:43:07.638804 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:07.699662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884577358.mount: Deactivated successfully. 
Jan 17 00:43:09.655746 kubelet[2596]: E0117 00:43:09.653900 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a"
Jan 17 00:43:11.061608 containerd[1476]: time="2026-01-17T00:43:11.061354313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:11.066155 containerd[1476]: time="2026-01-17T00:43:11.065023364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:43:11.072227 containerd[1476]: time="2026-01-17T00:43:11.069756064Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:11.076877 containerd[1476]: time="2026-01-17T00:43:11.075693886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:11.079981 containerd[1476]: time="2026-01-17T00:43:11.079551393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.942939038s"
Jan 17 00:43:11.079981 containerd[1476]: time="2026-01-17T00:43:11.079623788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:43:11.083530 containerd[1476]: time="2026-01-17T00:43:11.083461740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:43:11.153211 containerd[1476]: time="2026-01-17T00:43:11.152860129Z" level=info msg="CreateContainer within sandbox \"4beaa4970f048766cde17428556511ae68bd0c0426a996948cdefb0cbcc9ec69\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:43:11.201138 containerd[1476]: time="2026-01-17T00:43:11.201011048Z" level=info msg="CreateContainer within sandbox \"4beaa4970f048766cde17428556511ae68bd0c0426a996948cdefb0cbcc9ec69\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1bfc06b6179ef33cf5906eed3cb2bc08660904855ae60c62ce07aa22fe48aa7e\""
Jan 17 00:43:11.202682 containerd[1476]: time="2026-01-17T00:43:11.202548474Z" level=info msg="StartContainer for \"1bfc06b6179ef33cf5906eed3cb2bc08660904855ae60c62ce07aa22fe48aa7e\""
Jan 17 00:43:11.346281 systemd[1]: Started cri-containerd-1bfc06b6179ef33cf5906eed3cb2bc08660904855ae60c62ce07aa22fe48aa7e.scope - libcontainer container 1bfc06b6179ef33cf5906eed3cb2bc08660904855ae60c62ce07aa22fe48aa7e.
Jan 17 00:43:11.510047 containerd[1476]: time="2026-01-17T00:43:11.507249709Z" level=info msg="StartContainer for \"1bfc06b6179ef33cf5906eed3cb2bc08660904855ae60c62ce07aa22fe48aa7e\" returns successfully"
Jan 17 00:43:11.638817 kubelet[2596]: E0117 00:43:11.638340 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:11.638817 kubelet[2596]: E0117 00:43:11.638479 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a"
Jan 17 00:43:11.722653 kubelet[2596]: E0117 00:43:11.722571 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:43:11.722653 kubelet[2596]: W0117 00:43:11.722628 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:43:11.722653 kubelet[2596]: E0117 00:43:11.722660 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:43:11.754845 kubelet[2596]: E0117 00:43:11.751259 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:43:11.754845 kubelet[2596]: W0117 00:43:11.751273 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:43:11.754845 kubelet[2596]: E0117 00:43:11.751288 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:11.754845 kubelet[2596]: E0117 00:43:11.751569 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.754845 kubelet[2596]: W0117 00:43:11.751583 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.754845 kubelet[2596]: E0117 00:43:11.751597 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.754845 kubelet[2596]: E0117 00:43:11.754361 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.757320 kubelet[2596]: W0117 00:43:11.754374 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.757320 kubelet[2596]: E0117 00:43:11.754387 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.762082 kubelet[2596]: E0117 00:43:11.761830 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.762082 kubelet[2596]: W0117 00:43:11.761851 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.762082 kubelet[2596]: E0117 00:43:11.761871 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.762364 kubelet[2596]: E0117 00:43:11.762333 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.762364 kubelet[2596]: W0117 00:43:11.762351 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.762434 kubelet[2596]: E0117 00:43:11.762368 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.762866 kubelet[2596]: E0117 00:43:11.762802 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.762866 kubelet[2596]: W0117 00:43:11.762842 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.762866 kubelet[2596]: E0117 00:43:11.762859 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:11.764052 kubelet[2596]: E0117 00:43:11.763317 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.764052 kubelet[2596]: W0117 00:43:11.763334 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.764052 kubelet[2596]: E0117 00:43:11.763350 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.764491 kubelet[2596]: E0117 00:43:11.764428 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.764491 kubelet[2596]: W0117 00:43:11.764470 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.764491 kubelet[2596]: E0117 00:43:11.764487 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.765364 kubelet[2596]: E0117 00:43:11.765302 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.765364 kubelet[2596]: W0117 00:43:11.765341 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.765364 kubelet[2596]: E0117 00:43:11.765355 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.767420 kubelet[2596]: E0117 00:43:11.767240 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.767420 kubelet[2596]: W0117 00:43:11.767256 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.767420 kubelet[2596]: E0117 00:43:11.767272 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.767785 kubelet[2596]: E0117 00:43:11.767769 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.771208 kubelet[2596]: W0117 00:43:11.767856 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.771208 kubelet[2596]: E0117 00:43:11.767873 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:11.771849 kubelet[2596]: E0117 00:43:11.771783 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.771849 kubelet[2596]: W0117 00:43:11.771829 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.777036 kubelet[2596]: E0117 00:43:11.771853 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.778174 kubelet[2596]: E0117 00:43:11.777052 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.778174 kubelet[2596]: W0117 00:43:11.777068 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.778174 kubelet[2596]: E0117 00:43:11.777135 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.779888 kubelet[2596]: E0117 00:43:11.779866 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.780215 kubelet[2596]: W0117 00:43:11.780024 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.780215 kubelet[2596]: E0117 00:43:11.780049 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.780554 kubelet[2596]: E0117 00:43:11.780536 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.780818 kubelet[2596]: W0117 00:43:11.780620 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.780818 kubelet[2596]: E0117 00:43:11.780639 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.784785 kubelet[2596]: E0117 00:43:11.784668 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.784785 kubelet[2596]: W0117 00:43:11.784691 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.784785 kubelet[2596]: E0117 00:43:11.784709 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:11.787439 kubelet[2596]: E0117 00:43:11.787247 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.787439 kubelet[2596]: W0117 00:43:11.787266 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.787439 kubelet[2596]: E0117 00:43:11.787284 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.790756 kubelet[2596]: E0117 00:43:11.790716 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.795150 kubelet[2596]: W0117 00:43:11.791643 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.795150 kubelet[2596]: E0117 00:43:11.791676 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.802546 kubelet[2596]: E0117 00:43:11.801997 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.802546 kubelet[2596]: W0117 00:43:11.802023 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.802546 kubelet[2596]: E0117 00:43:11.802051 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:11.802546 kubelet[2596]: E0117 00:43:11.802462 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:11.802546 kubelet[2596]: W0117 00:43:11.802476 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:11.802546 kubelet[2596]: E0117 00:43:11.802491 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.477434 containerd[1476]: time="2026-01-17T00:43:12.477261294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.483037 containerd[1476]: time="2026-01-17T00:43:12.482610467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:43:12.485516 containerd[1476]: time="2026-01-17T00:43:12.485329922Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.499881 containerd[1476]: time="2026-01-17T00:43:12.499821945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:12.503137 containerd[1476]: time="2026-01-17T00:43:12.501409735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.417882123s" Jan 17 00:43:12.503137 containerd[1476]: time="2026-01-17T00:43:12.501459958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:43:12.538706 containerd[1476]: time="2026-01-17T00:43:12.536535825Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:43:12.658235 kubelet[2596]: E0117 00:43:12.658195 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:12.668190 kubelet[2596]: E0117 00:43:12.667876 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.668190 kubelet[2596]: W0117 00:43:12.667905 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.668190 kubelet[2596]: E0117 00:43:12.667974 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.668700 kubelet[2596]: E0117 00:43:12.668679 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.668876 kubelet[2596]: W0117 00:43:12.668775 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.668876 kubelet[2596]: E0117 00:43:12.668799 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.672760 kubelet[2596]: E0117 00:43:12.672570 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.672760 kubelet[2596]: W0117 00:43:12.672590 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.672760 kubelet[2596]: E0117 00:43:12.672611 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.675524 kubelet[2596]: E0117 00:43:12.674194 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.675524 kubelet[2596]: W0117 00:43:12.674291 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.675524 kubelet[2596]: E0117 00:43:12.674310 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.679709 kubelet[2596]: E0117 00:43:12.678531 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.679709 kubelet[2596]: W0117 00:43:12.678548 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.679709 kubelet[2596]: E0117 00:43:12.678565 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.680278 kubelet[2596]: E0117 00:43:12.680257 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.680550 kubelet[2596]: W0117 00:43:12.680404 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.680813 kubelet[2596]: E0117 00:43:12.680794 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.684180 kubelet[2596]: E0117 00:43:12.683911 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.684180 kubelet[2596]: W0117 00:43:12.683969 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.684180 kubelet[2596]: E0117 00:43:12.683990 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.687978 kubelet[2596]: E0117 00:43:12.687742 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.687978 kubelet[2596]: W0117 00:43:12.687764 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.687978 kubelet[2596]: E0117 00:43:12.687787 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.690289 kubelet[2596]: E0117 00:43:12.689136 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.690289 kubelet[2596]: W0117 00:43:12.689151 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.690289 kubelet[2596]: E0117 00:43:12.689182 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.690791 kubelet[2596]: E0117 00:43:12.690772 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.691579 kubelet[2596]: W0117 00:43:12.691396 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.691579 kubelet[2596]: E0117 00:43:12.691422 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.694725 kubelet[2596]: E0117 00:43:12.693370 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.702230 kubelet[2596]: W0117 00:43:12.695255 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.702230 kubelet[2596]: E0117 00:43:12.695325 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.702230 kubelet[2596]: E0117 00:43:12.697644 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.702230 kubelet[2596]: W0117 00:43:12.697659 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.702230 kubelet[2596]: E0117 00:43:12.697676 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.702477 kubelet[2596]: E0117 00:43:12.702372 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.702477 kubelet[2596]: W0117 00:43:12.702386 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.702477 kubelet[2596]: E0117 00:43:12.702402 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.702892 kubelet[2596]: E0117 00:43:12.702810 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.702892 kubelet[2596]: W0117 00:43:12.702823 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.702892 kubelet[2596]: E0117 00:43:12.702836 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.706126 kubelet[2596]: E0117 00:43:12.703712 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.706126 kubelet[2596]: W0117 00:43:12.703743 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.706126 kubelet[2596]: E0117 00:43:12.703757 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.706758 containerd[1476]: time="2026-01-17T00:43:12.706626194Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526\"" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.709283 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.713798 kubelet[2596]: W0117 00:43:12.709304 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.709321 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.710566 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.713798 kubelet[2596]: W0117 00:43:12.710582 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.710596 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.710883 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.713798 kubelet[2596]: W0117 00:43:12.710894 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.710905 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.713798 kubelet[2596]: E0117 00:43:12.711310 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.714413 kubelet[2596]: W0117 00:43:12.711321 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.714413 kubelet[2596]: E0117 00:43:12.711333 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.714413 kubelet[2596]: E0117 00:43:12.711603 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.714413 kubelet[2596]: W0117 00:43:12.711615 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.714413 kubelet[2596]: E0117 00:43:12.711630 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.714413 kubelet[2596]: E0117 00:43:12.713315 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.714413 kubelet[2596]: W0117 00:43:12.713328 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.714413 kubelet[2596]: E0117 00:43:12.713341 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.718579 containerd[1476]: time="2026-01-17T00:43:12.717256971Z" level=info msg="StartContainer for \"8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526\"" Jan 17 00:43:12.718786 kubelet[2596]: E0117 00:43:12.718340 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.718786 kubelet[2596]: W0117 00:43:12.718355 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.718786 kubelet[2596]: E0117 00:43:12.718372 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.718906 kubelet[2596]: E0117 00:43:12.718877 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.718906 kubelet[2596]: W0117 00:43:12.718888 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.718906 kubelet[2596]: E0117 00:43:12.718903 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719310 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.721854 kubelet[2596]: W0117 00:43:12.719324 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719336 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719582 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.721854 kubelet[2596]: W0117 00:43:12.719593 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719604 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719834 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.721854 kubelet[2596]: W0117 00:43:12.719844 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.721854 kubelet[2596]: E0117 00:43:12.719854 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.725844 kubelet[2596]: E0117 00:43:12.725408 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.725844 kubelet[2596]: W0117 00:43:12.725428 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.725844 kubelet[2596]: E0117 00:43:12.725445 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.731556 kubelet[2596]: E0117 00:43:12.730741 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.731556 kubelet[2596]: W0117 00:43:12.730774 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.731556 kubelet[2596]: E0117 00:43:12.730794 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.735446 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.751775 kubelet[2596]: W0117 00:43:12.735468 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.735489 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.735824 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.751775 kubelet[2596]: W0117 00:43:12.735835 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.735850 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.736295 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.751775 kubelet[2596]: W0117 00:43:12.736308 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.736321 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.751775 kubelet[2596]: E0117 00:43:12.736621 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.754408 kubelet[2596]: W0117 00:43:12.736632 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.754408 kubelet[2596]: E0117 00:43:12.736644 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:43:12.760056 kubelet[2596]: I0117 00:43:12.756391 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78c7bc8f4-v8rzj" podStartSLOduration=2.811191515 podStartE2EDuration="7.756367022s" podCreationTimestamp="2026-01-17 00:43:05 +0000 UTC" firstStartedPulling="2026-01-17 00:43:06.13614478 +0000 UTC m=+39.851637136" lastFinishedPulling="2026-01-17 00:43:11.081320288 +0000 UTC m=+44.796812643" observedRunningTime="2026-01-17 00:43:11.692874885 +0000 UTC m=+45.408367292" watchObservedRunningTime="2026-01-17 00:43:12.756367022 +0000 UTC m=+46.471859377" Jan 17 00:43:12.768196 kubelet[2596]: E0117 00:43:12.768051 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:43:12.768196 kubelet[2596]: W0117 00:43:12.768080 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:43:12.768196 kubelet[2596]: E0117 00:43:12.768150 2596 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:43:12.950849 systemd[1]: Started cri-containerd-8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526.scope - libcontainer container 8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526. Jan 17 00:43:13.079271 containerd[1476]: time="2026-01-17T00:43:13.068517886Z" level=info msg="StartContainer for \"8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526\" returns successfully" Jan 17 00:43:13.106553 systemd[1]: cri-containerd-8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526.scope: Deactivated successfully. Jan 17 00:43:13.230159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526-rootfs.mount: Deactivated successfully. 
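The pod_startup_latency_tracker entry above is self-consistent: podStartSLOduration is podStartE2EDuration minus the time spent pulling images. A short Go check against the logged wall-clock timestamps (the last nanosecond differs because the kubelet subtracts the monotonic m=+ readings instead):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-17 00:43:05 +0000 UTC")             // podCreationTimestamp
        running := parse("2026-01-17 00:43:12.756367022 +0000 UTC")   // watchObservedRunningTime
        pullStart := parse("2026-01-17 00:43:06.13614478 +0000 UTC")  // firstStartedPulling
        pullEnd := parse("2026-01-17 00:43:11.081320288 +0000 UTC")   // lastFinishedPulling

        e2e := running.Sub(created)       // 7.756367022s == podStartE2EDuration
        pulling := pullEnd.Sub(pullStart) // ~4.945175508s pulling calico/typha
        fmt.Println(e2e - pulling)        // ~2.811191515s == podStartSLOduration
    }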
Jan 17 00:43:13.529673 containerd[1476]: time="2026-01-17T00:43:13.529249252Z" level=info msg="shim disconnected" id=8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526 namespace=k8s.io Jan 17 00:43:13.529673 containerd[1476]: time="2026-01-17T00:43:13.529357041Z" level=warning msg="cleaning up after shim disconnected" id=8719a17eaffec5c6e3617ecd1918b236b2fbb7359ec6639532e38618c52f0526 namespace=k8s.io Jan 17 00:43:13.529673 containerd[1476]: time="2026-01-17T00:43:13.529400673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:13.641964 kubelet[2596]: E0117 00:43:13.640528 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:13.672081 kubelet[2596]: E0117 00:43:13.670727 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:13.672081 kubelet[2596]: E0117 00:43:13.671336 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:13.680572 containerd[1476]: time="2026-01-17T00:43:13.674190401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:43:15.639554 kubelet[2596]: E0117 00:43:15.639432 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:17.639485 kubelet[2596]: E0117 00:43:17.639398 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:18.977670 containerd[1476]: time="2026-01-17T00:43:18.976597201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:18.979772 containerd[1476]: time="2026-01-17T00:43:18.979721305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:43:18.986046 containerd[1476]: time="2026-01-17T00:43:18.983793511Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:18.988346 containerd[1476]: time="2026-01-17T00:43:18.988272683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:43:18.989611 containerd[1476]: time="2026-01-17T00:43:18.989409272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.315171582s" Jan 17 00:43:18.989611 containerd[1476]: time="2026-01-17T00:43:18.989461118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:43:19.001024 containerd[1476]: time="2026-01-17T00:43:19.000934067Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:43:19.043279 containerd[1476]: time="2026-01-17T00:43:19.042995512Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885\"" Jan 17 00:43:19.044150 containerd[1476]: time="2026-01-17T00:43:19.043915761Z" level=info msg="StartContainer for \"61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885\"" Jan 17 00:43:19.138374 systemd[1]: Started cri-containerd-61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885.scope - libcontainer container 61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885. Jan 17 00:43:19.263820 containerd[1476]: time="2026-01-17T00:43:19.263393227Z" level=info msg="StartContainer for \"61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885\" returns successfully" Jan 17 00:43:19.644407 kubelet[2596]: E0117 00:43:19.644165 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:19.706567 kubelet[2596]: E0117 00:43:19.705901 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:20.717363 kubelet[2596]: E0117 00:43:20.716320 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:20.902661 systemd[1]: cri-containerd-61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885.scope: Deactivated successfully. Jan 17 00:43:20.903923 systemd[1]: cri-containerd-61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885.scope: Consumed 1.402s CPU time. Jan 17 00:43:20.959637 kubelet[2596]: I0117 00:43:20.959065 2596 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:43:21.004788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885-rootfs.mount: Deactivated successfully. 
Jan 17 00:43:21.285530 containerd[1476]: time="2026-01-17T00:43:21.285003461Z" level=info msg="shim disconnected" id=61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885 namespace=k8s.io Jan 17 00:43:21.285530 containerd[1476]: time="2026-01-17T00:43:21.285076356Z" level=warning msg="cleaning up after shim disconnected" id=61b6fdd2e9ec47116809cf4213151ce67cac06ef3a4d0d5de9290a7be9fec885 namespace=k8s.io Jan 17 00:43:21.285530 containerd[1476]: time="2026-01-17T00:43:21.285132871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:21.326931 systemd[1]: Created slice kubepods-besteffort-podae016e77_356a_4fd1_8a79_0362524f48fd.slice - libcontainer container kubepods-besteffort-podae016e77_356a_4fd1_8a79_0362524f48fd.slice. Jan 17 00:43:21.345034 kubelet[2596]: I0117 00:43:21.344711 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-backend-key-pair\") pod \"whisker-6588bf47fd-nsnxs\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") " pod="calico-system/whisker-6588bf47fd-nsnxs" Jan 17 00:43:21.345922 kubelet[2596]: I0117 00:43:21.345349 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976xk\" (UniqueName: \"kubernetes.io/projected/ae016e77-356a-4fd1-8a79-0362524f48fd-kube-api-access-976xk\") pod \"whisker-6588bf47fd-nsnxs\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") " pod="calico-system/whisker-6588bf47fd-nsnxs" Jan 17 00:43:21.345922 kubelet[2596]: I0117 00:43:21.345391 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-ca-bundle\") pod \"whisker-6588bf47fd-nsnxs\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") " pod="calico-system/whisker-6588bf47fd-nsnxs" Jan 17 00:43:21.368914 systemd[1]: Created slice kubepods-besteffort-pod09a01101_a646_4d50_93a3_7a41aecfea23.slice - libcontainer container kubepods-besteffort-pod09a01101_a646_4d50_93a3_7a41aecfea23.slice. Jan 17 00:43:21.407575 systemd[1]: Created slice kubepods-burstable-pod1abfdd34_176a_4bd5_8495_196edf2ca012.slice - libcontainer container kubepods-burstable-pod1abfdd34_176a_4bd5_8495_196edf2ca012.slice. Jan 17 00:43:21.439222 systemd[1]: Created slice kubepods-besteffort-pod084547cb_aa8f_42ba_b949_f26ba954f5f8.slice - libcontainer container kubepods-besteffort-pod084547cb_aa8f_42ba_b949_f26ba954f5f8.slice. 
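The burst of "Created slice" lines shows how pod cgroups are named under systemd: kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID escaped to underscores because systemd reserves "-" as its slice hierarchy separator. A small sketch of the mapping; the helper name is illustrative:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the slice names visible in the log.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "ae016e77-356a-4fd1-8a79-0362524f48fd"))
        // kubepods-besteffort-podae016e77_356a_4fd1_8a79_0362524f48fd.slice
        fmt.Println(podSliceName("burstable", "1abfdd34-176a-4bd5-8495-196edf2ca012"))
        // kubepods-burstable-pod1abfdd34_176a_4bd5_8495_196edf2ca012.slice
    }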
Jan 17 00:43:21.450685 kubelet[2596]: I0117 00:43:21.446490 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dqh\" (UniqueName: \"kubernetes.io/projected/084547cb-aa8f-42ba-b949-f26ba954f5f8-kube-api-access-d7dqh\") pod \"calico-kube-controllers-674c9b8465-rpks6\" (UID: \"084547cb-aa8f-42ba-b949-f26ba954f5f8\") " pod="calico-system/calico-kube-controllers-674c9b8465-rpks6"
Jan 17 00:43:21.450685 kubelet[2596]: I0117 00:43:21.446689 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084547cb-aa8f-42ba-b949-f26ba954f5f8-tigera-ca-bundle\") pod \"calico-kube-controllers-674c9b8465-rpks6\" (UID: \"084547cb-aa8f-42ba-b949-f26ba954f5f8\") " pod="calico-system/calico-kube-controllers-674c9b8465-rpks6"
Jan 17 00:43:21.450685 kubelet[2596]: I0117 00:43:21.446829 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt8q\" (UniqueName: \"kubernetes.io/projected/09a01101-a646-4d50-93a3-7a41aecfea23-kube-api-access-cbt8q\") pod \"calico-apiserver-76d788f98c-msd48\" (UID: \"09a01101-a646-4d50-93a3-7a41aecfea23\") " pod="calico-apiserver/calico-apiserver-76d788f98c-msd48"
Jan 17 00:43:21.450685 kubelet[2596]: I0117 00:43:21.446969 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abfdd34-176a-4bd5-8495-196edf2ca012-config-volume\") pod \"coredns-66bc5c9577-j7s62\" (UID: \"1abfdd34-176a-4bd5-8495-196edf2ca012\") " pod="kube-system/coredns-66bc5c9577-j7s62"
Jan 17 00:43:21.450685 kubelet[2596]: I0117 00:43:21.447594 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec63a8db-6e49-4fec-8b7a-9f9042c1bf91-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-n22c9\" (UID: \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\") " pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:21.452443 kubelet[2596]: I0117 00:43:21.447619 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ec63a8db-6e49-4fec-8b7a-9f9042c1bf91-goldmane-key-pair\") pod \"goldmane-7c778bb748-n22c9\" (UID: \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\") " pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:21.452443 kubelet[2596]: I0117 00:43:21.447647 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pltn\" (UniqueName: \"kubernetes.io/projected/1abfdd34-176a-4bd5-8495-196edf2ca012-kube-api-access-9pltn\") pod \"coredns-66bc5c9577-j7s62\" (UID: \"1abfdd34-176a-4bd5-8495-196edf2ca012\") " pod="kube-system/coredns-66bc5c9577-j7s62"
Jan 17 00:43:21.452443 kubelet[2596]: I0117 00:43:21.447695 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec63a8db-6e49-4fec-8b7a-9f9042c1bf91-config\") pod \"goldmane-7c778bb748-n22c9\" (UID: \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\") " pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:21.452443 kubelet[2596]: I0117 00:43:21.447725 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbxq\" (UniqueName: \"kubernetes.io/projected/ec63a8db-6e49-4fec-8b7a-9f9042c1bf91-kube-api-access-9bbxq\") pod \"goldmane-7c778bb748-n22c9\" (UID: \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\") " pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:21.452443 kubelet[2596]: I0117 00:43:21.447771 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09a01101-a646-4d50-93a3-7a41aecfea23-calico-apiserver-certs\") pod \"calico-apiserver-76d788f98c-msd48\" (UID: \"09a01101-a646-4d50-93a3-7a41aecfea23\") " pod="calico-apiserver/calico-apiserver-76d788f98c-msd48"
Jan 17 00:43:21.470267 systemd[1]: Created slice kubepods-burstable-podd4177c8b_a26b_419d_9b18_e9e581c975bb.slice - libcontainer container kubepods-burstable-podd4177c8b_a26b_419d_9b18_e9e581c975bb.slice.
Jan 17 00:43:21.500336 systemd[1]: Created slice kubepods-besteffort-podec63a8db_6e49_4fec_8b7a_9f9042c1bf91.slice - libcontainer container kubepods-besteffort-podec63a8db_6e49_4fec_8b7a_9f9042c1bf91.slice.
Jan 17 00:43:21.535002 systemd[1]: Created slice kubepods-besteffort-pod61ae5c95_165c_41b7_b9c1_05cec94160e8.slice - libcontainer container kubepods-besteffort-pod61ae5c95_165c_41b7_b9c1_05cec94160e8.slice.
Jan 17 00:43:21.549259 kubelet[2596]: I0117 00:43:21.548808 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61ae5c95-165c-41b7-b9c1-05cec94160e8-calico-apiserver-certs\") pod \"calico-apiserver-76d788f98c-gwfnp\" (UID: \"61ae5c95-165c-41b7-b9c1-05cec94160e8\") " pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp"
Jan 17 00:43:21.549259 kubelet[2596]: I0117 00:43:21.548953 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4177c8b-a26b-419d-9b18-e9e581c975bb-config-volume\") pod \"coredns-66bc5c9577-fk5gs\" (UID: \"d4177c8b-a26b-419d-9b18-e9e581c975bb\") " pod="kube-system/coredns-66bc5c9577-fk5gs"
Jan 17 00:43:21.549259 kubelet[2596]: I0117 00:43:21.548993 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnw9\" (UniqueName: \"kubernetes.io/projected/d4177c8b-a26b-419d-9b18-e9e581c975bb-kube-api-access-mtnw9\") pod \"coredns-66bc5c9577-fk5gs\" (UID: \"d4177c8b-a26b-419d-9b18-e9e581c975bb\") " pod="kube-system/coredns-66bc5c9577-fk5gs"
Jan 17 00:43:21.549259 kubelet[2596]: I0117 00:43:21.549040 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8srw5\" (UniqueName: \"kubernetes.io/projected/61ae5c95-165c-41b7-b9c1-05cec94160e8-kube-api-access-8srw5\") pod \"calico-apiserver-76d788f98c-gwfnp\" (UID: \"61ae5c95-165c-41b7-b9c1-05cec94160e8\") " pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp"
Jan 17 00:43:21.655809 systemd[1]: Created slice kubepods-besteffort-podfa61c0c6_a39e_4c93_94a9_44f82847e39a.slice - libcontainer container kubepods-besteffort-podfa61c0c6_a39e_4c93_94a9_44f82847e39a.slice.
Jan 17 00:43:21.692388 containerd[1476]: time="2026-01-17T00:43:21.686916342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6588bf47fd-nsnxs,Uid:ae016e77-356a-4fd1-8a79-0362524f48fd,Namespace:calico-system,Attempt:0,}"
Jan 17 00:43:21.706772 containerd[1476]: time="2026-01-17T00:43:21.706568252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jdngt,Uid:fa61c0c6-a39e-4c93-94a9-44f82847e39a,Namespace:calico-system,Attempt:0,}"
Jan 17 00:43:21.720473 containerd[1476]: time="2026-01-17T00:43:21.720402273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-msd48,Uid:09a01101-a646-4d50-93a3-7a41aecfea23,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:43:21.739651 kubelet[2596]: E0117 00:43:21.739554 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:21.742534 kubelet[2596]: E0117 00:43:21.740380 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:21.742826 containerd[1476]: time="2026-01-17T00:43:21.741660074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j7s62,Uid:1abfdd34-176a-4bd5-8495-196edf2ca012,Namespace:kube-system,Attempt:0,}"
Jan 17 00:43:21.748743 containerd[1476]: time="2026-01-17T00:43:21.748385205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 17 00:43:21.778537 containerd[1476]: time="2026-01-17T00:43:21.777953608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674c9b8465-rpks6,Uid:084547cb-aa8f-42ba-b949-f26ba954f5f8,Namespace:calico-system,Attempt:0,}"
Jan 17 00:43:21.803584 kubelet[2596]: E0117 00:43:21.803043 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:21.806960 containerd[1476]: time="2026-01-17T00:43:21.806555632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk5gs,Uid:d4177c8b-a26b-419d-9b18-e9e581c975bb,Namespace:kube-system,Attempt:0,}"
Jan 17 00:43:21.845163 containerd[1476]: time="2026-01-17T00:43:21.844735325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n22c9,Uid:ec63a8db-6e49-4fec-8b7a-9f9042c1bf91,Namespace:calico-system,Attempt:0,}"
Jan 17 00:43:21.876203 containerd[1476]: time="2026-01-17T00:43:21.875347352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-gwfnp,Uid:61ae5c95-165c-41b7-b9c1-05cec94160e8,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:43:22.447320 containerd[1476]: time="2026-01-17T00:43:22.447261244Z" level=error msg="Failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.463770 containerd[1476]: time="2026-01-17T00:43:22.461176490Z" level=error msg="Failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.463770 containerd[1476]: time="2026-01-17T00:43:22.462234492Z" level=error msg="encountered an error cleaning up failed sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.463770 containerd[1476]: time="2026-01-17T00:43:22.462311335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674c9b8465-rpks6,Uid:084547cb-aa8f-42ba-b949-f26ba954f5f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.464559 containerd[1476]: time="2026-01-17T00:43:22.464478935Z" level=error msg="Failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.466752 containerd[1476]: time="2026-01-17T00:43:22.465369049Z" level=error msg="encountered an error cleaning up failed sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.466752 containerd[1476]: time="2026-01-17T00:43:22.465427648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jdngt,Uid:fa61c0c6-a39e-4c93-94a9-44f82847e39a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.470985 containerd[1476]: time="2026-01-17T00:43:22.470908260Z" level=error msg="encountered an error cleaning up failed sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.470985 containerd[1476]: time="2026-01-17T00:43:22.470991124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6588bf47fd-nsnxs,Uid:ae016e77-356a-4fd1-8a79-0362524f48fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.472550 kubelet[2596]: E0117 00:43:22.472456 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.472640 kubelet[2596]: E0117 00:43:22.472570 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6588bf47fd-nsnxs"
Jan 17 00:43:22.472640 kubelet[2596]: E0117 00:43:22.472599 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6588bf47fd-nsnxs"
Jan 17 00:43:22.472723 kubelet[2596]: E0117 00:43:22.472662 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6588bf47fd-nsnxs_calico-system(ae016e77-356a-4fd1-8a79-0362524f48fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6588bf47fd-nsnxs_calico-system(ae016e77-356a-4fd1-8a79-0362524f48fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd"
Jan 17 00:43:22.473548 kubelet[2596]: E0117 00:43:22.473160 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.473548 kubelet[2596]: E0117 00:43:22.473201 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6"
Jan 17 00:43:22.473548 kubelet[2596]: E0117 00:43:22.473223 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6"
Jan 17 00:43:22.473682 kubelet[2596]: E0117 00:43:22.473262 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8"
Jan 17 00:43:22.473682 kubelet[2596]: E0117 00:43:22.473307 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.473682 kubelet[2596]: E0117 00:43:22.473332 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jdngt"
Jan 17 00:43:22.473873 kubelet[2596]: E0117 00:43:22.473353 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jdngt"
Jan 17 00:43:22.473873 kubelet[2596]: E0117 00:43:22.473422 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a"
Jan 17 00:43:22.480205 containerd[1476]: time="2026-01-17T00:43:22.479294160Z" level=error msg="Failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.488554 containerd[1476]: time="2026-01-17T00:43:22.488425215Z" level=error msg="Failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.490228 containerd[1476]: time="2026-01-17T00:43:22.490164085Z" level=error msg="encountered an error cleaning up failed sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.490331 containerd[1476]: time="2026-01-17T00:43:22.490248502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-msd48,Uid:09a01101-a646-4d50-93a3-7a41aecfea23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.490749 kubelet[2596]: E0117 00:43:22.490683 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.490918 kubelet[2596]: E0117 00:43:22.490759 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48"
Jan 17 00:43:22.490918 kubelet[2596]: E0117 00:43:22.490786 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48"
Jan 17 00:43:22.490918 kubelet[2596]: E0117 00:43:22.490898 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23"
Jan 17 00:43:22.491133 containerd[1476]: time="2026-01-17T00:43:22.491064719Z" level=error msg="encountered an error cleaning up failed sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.491183 containerd[1476]: time="2026-01-17T00:43:22.491163983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j7s62,Uid:1abfdd34-176a-4bd5-8495-196edf2ca012,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.491360 kubelet[2596]: E0117 00:43:22.491296 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.491420 kubelet[2596]: E0117 00:43:22.491361 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j7s62"
Jan 17 00:43:22.491420 kubelet[2596]: E0117 00:43:22.491383 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j7s62"
Jan 17 00:43:22.491617 kubelet[2596]: E0117 00:43:22.491425 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j7s62_kube-system(1abfdd34-176a-4bd5-8495-196edf2ca012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j7s62_kube-system(1abfdd34-176a-4bd5-8495-196edf2ca012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j7s62" podUID="1abfdd34-176a-4bd5-8495-196edf2ca012"
Jan 17 00:43:22.501611 containerd[1476]: time="2026-01-17T00:43:22.501361111Z" level=error msg="Failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.523750 containerd[1476]: time="2026-01-17T00:43:22.523650699Z" level=error msg="Failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.524045 containerd[1476]: time="2026-01-17T00:43:22.523757315Z" level=error msg="Failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.524703 containerd[1476]: time="2026-01-17T00:43:22.524482775Z" level=error msg="encountered an error cleaning up failed sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.524703 containerd[1476]: time="2026-01-17T00:43:22.524561671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n22c9,Uid:ec63a8db-6e49-4fec-8b7a-9f9042c1bf91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.525282 containerd[1476]: time="2026-01-17T00:43:22.525189448Z" level=error msg="encountered an error cleaning up failed sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.525282 containerd[1476]: time="2026-01-17T00:43:22.525259588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-gwfnp,Uid:61ae5c95-165c-41b7-b9c1-05cec94160e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.526184 kubelet[2596]: E0117 00:43:22.526033 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.526329 kubelet[2596]: E0117 00:43:22.526188 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp"
Jan 17 00:43:22.526329 kubelet[2596]: E0117 00:43:22.526221 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp"
Jan 17 00:43:22.526329 kubelet[2596]: E0117 00:43:22.526292 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8"
Jan 17 00:43:22.526626 kubelet[2596]: E0117 00:43:22.526587 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.526626 kubelet[2596]: E0117 00:43:22.526639 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:22.527298 kubelet[2596]: E0117 00:43:22.526664 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-n22c9"
Jan 17 00:43:22.527298 kubelet[2596]: E0117 00:43:22.526784 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91"
Jan 17 00:43:22.562457 containerd[1476]: time="2026-01-17T00:43:22.562044946Z" level=error msg="encountered an error cleaning up failed sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.573202 containerd[1476]: time="2026-01-17T00:43:22.571304812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk5gs,Uid:d4177c8b-a26b-419d-9b18-e9e581c975bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.573342 kubelet[2596]: E0117 00:43:22.571715 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.573342 kubelet[2596]: E0117 00:43:22.572964 2596 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fk5gs"
Jan 17 00:43:22.573342 kubelet[2596]: E0117 00:43:22.573000 2596 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fk5gs"
Jan 17 00:43:22.573532 kubelet[2596]: E0117 00:43:22.573064 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fk5gs_kube-system(d4177c8b-a26b-419d-9b18-e9e581c975bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fk5gs_kube-system(d4177c8b-a26b-419d-9b18-e9e581c975bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fk5gs" podUID="d4177c8b-a26b-419d-9b18-e9e581c975bb"
Jan 17 00:43:22.773167 kubelet[2596]: I0117 00:43:22.772543 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:43:22.780085 kubelet[2596]: I0117 00:43:22.777005 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:43:22.791585 kubelet[2596]: I0117 00:43:22.791544 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a"
Jan 17 00:43:22.802796 kubelet[2596]: I0117 00:43:22.802081 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:43:22.814305 containerd[1476]: time="2026-01-17T00:43:22.813329309Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\""
Jan 17 00:43:22.820478 containerd[1476]: time="2026-01-17T00:43:22.815614034Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\""
Jan 17 00:43:22.820478 containerd[1476]: time="2026-01-17T00:43:22.813584092Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\""
Jan 17 00:43:22.820478 containerd[1476]: time="2026-01-17T00:43:22.818322506Z" level=info msg="Ensure that sandbox 6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed in task-service has been cleanup successfully"
Jan 17 00:43:22.820478 containerd[1476]: time="2026-01-17T00:43:22.818478014Z" level=info msg="Ensure that sandbox c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f in task-service has been cleanup successfully"
Jan 17 00:43:22.820478 containerd[1476]: time="2026-01-17T00:43:22.819044347Z" level=info msg="Ensure that sandbox abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a in task-service has been cleanup successfully"
Jan 17 00:43:22.820968 containerd[1476]: time="2026-01-17T00:43:22.820932532Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\""
Jan 17 00:43:22.821278 containerd[1476]: time="2026-01-17T00:43:22.821251264Z" level=info msg="Ensure that sandbox 4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a in task-service has been cleanup successfully"
Jan 17 00:43:22.823653 kubelet[2596]: I0117 00:43:22.823566 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97"
Jan 17 00:43:22.831467 containerd[1476]: time="2026-01-17T00:43:22.831144879Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\""
Jan 17 00:43:22.831973 containerd[1476]: time="2026-01-17T00:43:22.831882409Z" level=info msg="Ensure that sandbox bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97 in task-service has been cleanup successfully"
Jan 17 00:43:22.836915 kubelet[2596]: I0117 00:43:22.834908 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:43:22.838273 containerd[1476]: time="2026-01-17T00:43:22.838195868Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\""
Jan 17 00:43:22.840942 containerd[1476]: time="2026-01-17T00:43:22.839997202Z" level=info msg="Ensure that sandbox 6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8 in task-service has been cleanup successfully"
Jan 17 00:43:22.853593 kubelet[2596]: I0117 00:43:22.851343 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac"
Jan 17 00:43:22.857009 containerd[1476]: time="2026-01-17T00:43:22.854980442Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\""
Jan 17 00:43:22.857009 containerd[1476]: time="2026-01-17T00:43:22.855249742Z" level=info msg="Ensure that sandbox ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac in task-service has been cleanup successfully"
Jan 17 00:43:22.863006 kubelet[2596]: I0117 00:43:22.862971 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854"
Jan 17 00:43:22.871223 containerd[1476]: time="2026-01-17T00:43:22.871152948Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\""
Jan 17 00:43:22.872350 containerd[1476]: time="2026-01-17T00:43:22.872248654Z" level=info msg="Ensure that sandbox f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854 in task-service has been cleanup successfully"
Jan 17 00:43:22.929956 containerd[1476]: time="2026-01-17T00:43:22.929826053Z" level=error msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" failed" error="failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.930709 kubelet[2596]: E0117 00:43:22.930406 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a"
Jan 17 00:43:22.930709 kubelet[2596]: E0117 00:43:22.930520 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a"}
Jan 17 00:43:22.930709 kubelet[2596]: E0117 00:43:22.930610 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:22.930709 kubelet[2596]: E0117 00:43:22.930661 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23"
Jan 17 00:43:22.963408 containerd[1476]: time="2026-01-17T00:43:22.963344713Z" level=error msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" failed" error="failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.964207 kubelet[2596]: E0117 00:43:22.963974 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:43:22.964207 kubelet[2596]: E0117 00:43:22.964040 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"}
Jan 17 00:43:22.964207 kubelet[2596]: E0117 00:43:22.964082 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:22.964207 kubelet[2596]: E0117 00:43:22.964166 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8"
Jan 17 00:43:22.964653 containerd[1476]: time="2026-01-17T00:43:22.964622697Z" level=error msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" failed" error="failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.964984 kubelet[2596]: E0117 00:43:22.964954 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac"
Jan 17 00:43:22.965232 kubelet[2596]: E0117 00:43:22.965208 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac"}
Jan 17 00:43:22.965385 kubelet[2596]: E0117 00:43:22.965317 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:22.965385 kubelet[2596]: E0117 00:43:22.965355 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a"
Jan 17 00:43:22.995325 containerd[1476]: time="2026-01-17T00:43:22.992779946Z" level=error msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" failed" error="failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:22.996243 kubelet[2596]: E0117 00:43:22.996186 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97"
Jan 17 00:43:22.996667 kubelet[2596]: E0117 00:43:22.996578 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97"}
Jan 17 00:43:22.996819 kubelet[2596]: E0117 00:43:22.996797 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:22.997054 kubelet[2596]: E0117 00:43:22.997024 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fk5gs" podUID="d4177c8b-a26b-419d-9b18-e9e581c975bb"
Jan 17 00:43:23.000721 containerd[1476]: time="2026-01-17T00:43:23.000358662Z" level=error msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" failed" error="failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:23.001147 kubelet[2596]: E0117 00:43:23.000992 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:43:23.001147 kubelet[2596]: E0117 00:43:23.001073 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"}
Jan 17 00:43:23.001267 kubelet[2596]: E0117 00:43:23.001173 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:23.001267 kubelet[2596]: E0117 00:43:23.001214 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91"
Jan 17 00:43:23.005437 containerd[1476]: time="2026-01-17T00:43:23.004967103Z" level=error msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" failed" error="failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:23.005072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8-shm.mount: Deactivated successfully.
Jan 17 00:43:23.005274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a-shm.mount: Deactivated successfully.
Jan 17 00:43:23.005986 kubelet[2596]: E0117 00:43:23.005532 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:43:23.005986 kubelet[2596]: E0117 00:43:23.005578 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"}
Jan 17 00:43:23.005986 kubelet[2596]: E0117 00:43:23.005616 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:23.005986 kubelet[2596]: E0117 00:43:23.005656 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8"
Jan 17 00:43:23.005389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854-shm.mount: Deactivated successfully.
Jan 17 00:43:23.005492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac-shm.mount: Deactivated successfully.
Jan 17 00:43:23.005606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a-shm.mount: Deactivated successfully.
Jan 17 00:43:23.009550 containerd[1476]: time="2026-01-17T00:43:23.007294084Z" level=error msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" failed" error="failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:23.009677 kubelet[2596]: E0117 00:43:23.008776 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:43:23.009677 kubelet[2596]: E0117 00:43:23.008820 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"}
Jan 17 00:43:23.009677 kubelet[2596]: E0117 00:43:23.008920 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:23.009677 kubelet[2596]: E0117 00:43:23.008956 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd"
Jan 17 00:43:23.013276 containerd[1476]: time="2026-01-17T00:43:23.013201325Z" level=error msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" failed" error="failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:43:23.013643 kubelet[2596]: E0117 00:43:23.013520 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854"
Jan 17 00:43:23.013643 kubelet[2596]: E0117 00:43:23.013588 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854"}
Jan 17 00:43:23.013643 kubelet[2596]: E0117 00:43:23.013637 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:43:23.013867 kubelet[2596]: E0117 00:43:23.013677 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j7s62" podUID="1abfdd34-176a-4bd5-8495-196edf2ca012"
Jan 17 00:43:47.652773 systemd[1]: cri-containerd-c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8.scope: Deactivated successfully.
Jan 17 00:43:47.664792 systemd[1]: cri-containerd-c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8.scope: Consumed 17.773s CPU time, 19.5M memory peak, 0B memory swap peak.
Jan 17 00:43:48.039445 containerd[1476]: time="2026-01-17T00:43:48.039304918Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\""
Jan 17 00:43:48.041411 kubelet[2596]: E0117 00:43:48.039620 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:48.045307 kubelet[2596]: E0117 00:43:48.044421 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:43:48.048586 systemd[1]: cri-containerd-0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f.scope: Deactivated successfully.
Jan 17 00:43:48.050906 systemd[1]: cri-containerd-0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f.scope: Consumed 9.211s CPU time, 16.2M memory peak, 0B memory swap peak.
Jan 17 00:43:48.056757 containerd[1476]: time="2026-01-17T00:43:48.047894939Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" Jan 17 00:43:48.065261 containerd[1476]: time="2026-01-17T00:43:48.047931527Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:43:48.074459 containerd[1476]: time="2026-01-17T00:43:48.047960391Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" Jan 17 00:43:48.078196 containerd[1476]: time="2026-01-17T00:43:48.048272033Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:43:48.084398 containerd[1476]: time="2026-01-17T00:43:48.047990958Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:43:48.084718 containerd[1476]: time="2026-01-17T00:43:48.048174160Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" Jan 17 00:43:48.086981 containerd[1476]: time="2026-01-17T00:43:48.048237869Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:43:48.289406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8-rootfs.mount: Deactivated successfully. Jan 17 00:43:48.453652 containerd[1476]: time="2026-01-17T00:43:48.448386411Z" level=info msg="shim disconnected" id=c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8 namespace=k8s.io Jan 17 00:43:48.453652 containerd[1476]: time="2026-01-17T00:43:48.448522395Z" level=warning msg="cleaning up after shim disconnected" id=c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8 namespace=k8s.io Jan 17 00:43:48.453652 containerd[1476]: time="2026-01-17T00:43:48.448554335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:48.521889 containerd[1476]: time="2026-01-17T00:43:48.521648494Z" level=error msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" failed" error="failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.522218 kubelet[2596]: E0117 00:43:48.522070 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:43:48.522422 kubelet[2596]: E0117 00:43:48.522214 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854"} Jan 17 00:43:48.522422 kubelet[2596]: E0117 00:43:48.522328 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.522422 kubelet[2596]: E0117 00:43:48.522375 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j7s62" podUID="1abfdd34-176a-4bd5-8495-196edf2ca012" Jan 17 00:43:48.524203 containerd[1476]: time="2026-01-17T00:43:48.524080897Z" level=error msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" failed" error="failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.533765 kubelet[2596]: E0117 00:43:48.525149 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:43:48.533765 kubelet[2596]: E0117 00:43:48.525226 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac"} Jan 17 00:43:48.533765 kubelet[2596]: E0117 00:43:48.525270 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.533765 kubelet[2596]: E0117 00:43:48.533615 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:43:48.536372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f-rootfs.mount: Deactivated 
successfully. Jan 17 00:43:48.571527 containerd[1476]: time="2026-01-17T00:43:48.570420629Z" level=error msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" failed" error="failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.578531 kubelet[2596]: E0117 00:43:48.571136 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:43:48.578721 kubelet[2596]: E0117 00:43:48.578557 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"} Jan 17 00:43:48.578788 kubelet[2596]: E0117 00:43:48.578687 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.579259 kubelet[2596]: E0117 00:43:48.579183 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:43:48.593483 containerd[1476]: time="2026-01-17T00:43:48.593419035Z" level=error msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" failed" error="failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.597892 kubelet[2596]: E0117 00:43:48.597835 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:43:48.598398 kubelet[2596]: E0117 00:43:48.598292 2596 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97"} Jan 17 00:43:48.598625 kubelet[2596]: E0117 00:43:48.598596 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.598976 kubelet[2596]: E0117 00:43:48.598941 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fk5gs" podUID="d4177c8b-a26b-419d-9b18-e9e581c975bb" Jan 17 00:43:48.627995 containerd[1476]: time="2026-01-17T00:43:48.620421536Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:43:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:43:48.634519 containerd[1476]: time="2026-01-17T00:43:48.634409659Z" level=info msg="shim disconnected" id=0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f namespace=k8s.io Jan 17 00:43:48.634869 containerd[1476]: time="2026-01-17T00:43:48.634779871Z" level=warning msg="cleaning up after shim disconnected" id=0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f namespace=k8s.io Jan 17 00:43:48.635036 containerd[1476]: time="2026-01-17T00:43:48.635012185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:48.688488 containerd[1476]: time="2026-01-17T00:43:48.685266417Z" level=error msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" failed" error="failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.688709 kubelet[2596]: E0117 00:43:48.685657 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:43:48.688709 kubelet[2596]: E0117 00:43:48.685730 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"} Jan 17 00:43:48.688709 kubelet[2596]: E0117 00:43:48.685777 2596 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.689167 kubelet[2596]: E0117 00:43:48.688182 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd" Jan 17 00:43:48.689517 containerd[1476]: time="2026-01-17T00:43:48.689416784Z" level=error msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" failed" error="failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.689744 kubelet[2596]: E0117 00:43:48.689648 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:43:48.689835 kubelet[2596]: E0117 00:43:48.689755 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a"} Jan 17 00:43:48.689910 kubelet[2596]: E0117 00:43:48.689796 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.689991 kubelet[2596]: E0117 00:43:48.689875 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:43:48.703843 containerd[1476]: time="2026-01-17T00:43:48.703740344Z" level=error msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" failed" error="failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.704575 kubelet[2596]: E0117 00:43:48.704353 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:43:48.704575 kubelet[2596]: E0117 00:43:48.704419 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"} Jan 17 00:43:48.704575 kubelet[2596]: E0117 00:43:48.704467 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.704575 kubelet[2596]: E0117 00:43:48.704511 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:43:48.709913 containerd[1476]: time="2026-01-17T00:43:48.708851509Z" level=error msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" failed" error="failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:48.710035 kubelet[2596]: E0117 00:43:48.709973 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:43:48.710636 kubelet[2596]: E0117 00:43:48.710433 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"} Jan 17 00:43:48.711588 kubelet[2596]: E0117 00:43:48.711499 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:48.713585 kubelet[2596]: E0117 00:43:48.711954 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:43:48.753520 containerd[1476]: time="2026-01-17T00:43:48.752241773Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:43:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:43:49.022177 kubelet[2596]: I0117 00:43:49.016679 2596 scope.go:117] "RemoveContainer" containerID="c75149b03547cd609f309a1e587c484734a553028114de8ad3681140677843a8" Jan 17 00:43:49.022177 kubelet[2596]: E0117 00:43:49.016776 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:49.028068 kubelet[2596]: I0117 00:43:49.027178 2596 scope.go:117] "RemoveContainer" containerID="0431d9ed6c8a6cb8d20f9ba1e616f6ae785a2efb596916693bf05e119420d78f" Jan 17 00:43:49.028068 kubelet[2596]: E0117 00:43:49.027258 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:49.037939 containerd[1476]: time="2026-01-17T00:43:49.034841945Z" level=info msg="CreateContainer within sandbox \"cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:43:49.042388 containerd[1476]: time="2026-01-17T00:43:49.038711603Z" level=info msg="CreateContainer within sandbox \"9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:43:49.212041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037485709.mount: Deactivated successfully. 
Jan 17 00:43:49.333594 containerd[1476]: time="2026-01-17T00:43:49.316667771Z" level=info msg="CreateContainer within sandbox \"cde36c37d5c6b7cfbfb5df7de2f69817f93485c45849027276f053c62b0c5db2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"175eaa132e4d5b1c7eac5ae2aa9d8fdf3c5ce17214480d1b06dd7cb9119f194e\"" Jan 17 00:43:49.333594 containerd[1476]: time="2026-01-17T00:43:49.321797720Z" level=info msg="StartContainer for \"175eaa132e4d5b1c7eac5ae2aa9d8fdf3c5ce17214480d1b06dd7cb9119f194e\"" Jan 17 00:43:49.388955 containerd[1476]: time="2026-01-17T00:43:49.382062739Z" level=info msg="CreateContainer within sandbox \"9c0b57c43d70d785e2f26ecf639266e8f034fad1addcf65c39f9fd69e286e1a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"29e230d3004882a0fd08a3817ca8f9efb1bc2f2b8edc37614b39eafed74eb14f\"" Jan 17 00:43:49.402581 containerd[1476]: time="2026-01-17T00:43:49.394196210Z" level=info msg="StartContainer for \"29e230d3004882a0fd08a3817ca8f9efb1bc2f2b8edc37614b39eafed74eb14f\"" Jan 17 00:43:49.645663 systemd[1]: Started cri-containerd-175eaa132e4d5b1c7eac5ae2aa9d8fdf3c5ce17214480d1b06dd7cb9119f194e.scope - libcontainer container 175eaa132e4d5b1c7eac5ae2aa9d8fdf3c5ce17214480d1b06dd7cb9119f194e. Jan 17 00:43:49.693314 systemd[1]: Started cri-containerd-29e230d3004882a0fd08a3817ca8f9efb1bc2f2b8edc37614b39eafed74eb14f.scope - libcontainer container 29e230d3004882a0fd08a3817ca8f9efb1bc2f2b8edc37614b39eafed74eb14f. Jan 17 00:43:49.976717 containerd[1476]: time="2026-01-17T00:43:49.976630324Z" level=info msg="StartContainer for \"175eaa132e4d5b1c7eac5ae2aa9d8fdf3c5ce17214480d1b06dd7cb9119f194e\" returns successfully" Jan 17 00:43:49.996527 containerd[1476]: time="2026-01-17T00:43:49.993792465Z" level=info msg="StartContainer for \"29e230d3004882a0fd08a3817ca8f9efb1bc2f2b8edc37614b39eafed74eb14f\" returns successfully" Jan 17 00:43:50.060568 kubelet[2596]: E0117 00:43:50.059493 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:50.066964 kubelet[2596]: E0117 00:43:50.063742 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:51.083697 kubelet[2596]: E0117 00:43:51.079716 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:52.100742 kubelet[2596]: E0117 00:43:52.099451 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:57.649738 systemd[1]: cri-containerd-8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb.scope: Deactivated successfully. Jan 17 00:43:57.650283 systemd[1]: cri-containerd-8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb.scope: Consumed 12.403s CPU time. Jan 17 00:43:57.753345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb-rootfs.mount: Deactivated successfully. 
Jan 17 00:43:58.015489 containerd[1476]: time="2026-01-17T00:43:58.015299699Z" level=info msg="shim disconnected" id=8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb namespace=k8s.io Jan 17 00:43:58.015489 containerd[1476]: time="2026-01-17T00:43:58.015369930Z" level=warning msg="cleaning up after shim disconnected" id=8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb namespace=k8s.io Jan 17 00:43:58.015489 containerd[1476]: time="2026-01-17T00:43:58.015386110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:58.165904 kubelet[2596]: I0117 00:43:58.165803 2596 scope.go:117] "RemoveContainer" containerID="8aee2efc8ae75fcfb15e740cedddc9c953914e5b4d94b08dd8a52fdd986a3acb" Jan 17 00:43:58.177141 containerd[1476]: time="2026-01-17T00:43:58.175534656Z" level=info msg="CreateContainer within sandbox \"4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:43:58.226466 containerd[1476]: time="2026-01-17T00:43:58.226318389Z" level=info msg="CreateContainer within sandbox \"4ef47ae8812d086c555d092e8659c83fe8c23ce63d3096c06999ca231954d187\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"979d526e961b70737d21bf5f16c4570f321d09ccf11bf75ce84af3ea6c38578b\"" Jan 17 00:43:58.227986 containerd[1476]: time="2026-01-17T00:43:58.227957637Z" level=info msg="StartContainer for \"979d526e961b70737d21bf5f16c4570f321d09ccf11bf75ce84af3ea6c38578b\"" Jan 17 00:43:58.228903 kubelet[2596]: E0117 00:43:58.228770 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:58.331158 systemd[1]: Started cri-containerd-979d526e961b70737d21bf5f16c4570f321d09ccf11bf75ce84af3ea6c38578b.scope - libcontainer container 979d526e961b70737d21bf5f16c4570f321d09ccf11bf75ce84af3ea6c38578b. 
Jan 17 00:43:58.401578 containerd[1476]: time="2026-01-17T00:43:58.401519393Z" level=info msg="StartContainer for \"979d526e961b70737d21bf5f16c4570f321d09ccf11bf75ce84af3ea6c38578b\" returns successfully" Jan 17 00:43:59.399572 kubelet[2596]: E0117 00:43:59.397018 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:59.643814 containerd[1476]: time="2026-01-17T00:43:59.641513511Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" Jan 17 00:43:59.848566 containerd[1476]: time="2026-01-17T00:43:59.848438979Z" level=error msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" failed" error="failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:43:59.849395 kubelet[2596]: E0117 00:43:59.849203 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:43:59.849395 kubelet[2596]: E0117 00:43:59.849313 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"} Jan 17 00:43:59.849395 kubelet[2596]: E0117 00:43:59.849369 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:43:59.849672 kubelet[2596]: E0117 00:43:59.849417 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae016e77-356a-4fd1-8a79-0362524f48fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd" Jan 17 00:44:00.205326 kubelet[2596]: E0117 00:44:00.205251 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:00.648235 containerd[1476]: time="2026-01-17T00:44:00.645191568Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:44:00.648235 containerd[1476]: 
time="2026-01-17T00:44:00.647977175Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" Jan 17 00:44:00.792552 containerd[1476]: time="2026-01-17T00:44:00.791809291Z" level=error msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" failed" error="failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:00.792715 kubelet[2596]: E0117 00:44:00.792190 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:00.792715 kubelet[2596]: E0117 00:44:00.792254 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"} Jan 17 00:44:00.792715 kubelet[2596]: E0117 00:44:00.792295 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:00.792715 kubelet[2596]: E0117 00:44:00.792404 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61ae5c95-165c-41b7-b9c1-05cec94160e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:44:00.806265 containerd[1476]: time="2026-01-17T00:44:00.806075403Z" level=error msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" failed" error="failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:00.816975 kubelet[2596]: E0117 00:44:00.816308 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:00.816975 kubelet[2596]: E0117 00:44:00.816377 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97"} Jan 17 00:44:00.816975 kubelet[2596]: E0117 00:44:00.816417 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:00.816975 kubelet[2596]: E0117 00:44:00.816453 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4177c8b-a26b-419d-9b18-e9e581c975bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fk5gs" podUID="d4177c8b-a26b-419d-9b18-e9e581c975bb" Jan 17 00:44:01.210748 kubelet[2596]: E0117 00:44:01.210665 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:01.641292 kubelet[2596]: E0117 00:44:01.639769 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:01.642997 containerd[1476]: time="2026-01-17T00:44:01.642748608Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:44:01.644371 containerd[1476]: time="2026-01-17T00:44:01.642785259Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:44:01.740468 containerd[1476]: time="2026-01-17T00:44:01.740082882Z" level=error msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" failed" error="failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:01.744432 kubelet[2596]: E0117 00:44:01.740747 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:01.744432 kubelet[2596]: E0117 00:44:01.740828 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac"} Jan 17 00:44:01.744432 kubelet[2596]: E0117 00:44:01.740904 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:01.744432 kubelet[2596]: E0117 00:44:01.740932 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa61c0c6-a39e-4c93-94a9-44f82847e39a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:01.762839 containerd[1476]: time="2026-01-17T00:44:01.762461212Z" level=error msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" failed" error="failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:01.766242 kubelet[2596]: E0117 00:44:01.764621 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:01.766242 kubelet[2596]: E0117 00:44:01.764701 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a"} Jan 17 00:44:01.766242 kubelet[2596]: E0117 00:44:01.764762 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:01.766242 kubelet[2596]: E0117 00:44:01.764806 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a01101-a646-4d50-93a3-7a41aecfea23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:44:02.641956 containerd[1476]: time="2026-01-17T00:44:02.641400711Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" Jan 17 00:44:02.641956 containerd[1476]: time="2026-01-17T00:44:02.641590068Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" Jan 17 00:44:02.741694 containerd[1476]: time="2026-01-17T00:44:02.741594824Z" level=error msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" failed" error="failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:02.742959 kubelet[2596]: E0117 00:44:02.742139 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:02.742959 kubelet[2596]: E0117 00:44:02.742222 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"} Jan 17 00:44:02.742959 kubelet[2596]: E0117 00:44:02.742269 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:02.742959 kubelet[2596]: E0117 00:44:02.742317 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:44:02.781989 containerd[1476]: time="2026-01-17T00:44:02.781879842Z" level=error msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" failed" error="failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 
00:44:02.783185 kubelet[2596]: E0117 00:44:02.783034 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:02.783824 kubelet[2596]: E0117 00:44:02.783302 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"} Jan 17 00:44:02.783824 kubelet[2596]: E0117 00:44:02.783382 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:02.783824 kubelet[2596]: E0117 00:44:02.783427 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"084547cb-aa8f-42ba-b949-f26ba954f5f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:03.022468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835580368.mount: Deactivated successfully. 
Jan 17 00:44:03.142292 containerd[1476]: time="2026-01-17T00:44:03.142173222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:03.144815 containerd[1476]: time="2026-01-17T00:44:03.144622141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:44:03.146903 containerd[1476]: time="2026-01-17T00:44:03.146756092Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:03.162423 containerd[1476]: time="2026-01-17T00:44:03.161726378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:03.163553 containerd[1476]: time="2026-01-17T00:44:03.162814490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 41.41437393s" Jan 17 00:44:03.163553 containerd[1476]: time="2026-01-17T00:44:03.163072763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:44:03.212625 containerd[1476]: time="2026-01-17T00:44:03.212532109Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:44:03.251683 containerd[1476]: time="2026-01-17T00:44:03.251295522Z" level=info msg="CreateContainer within sandbox \"566f92bf6f288959c7e35bad5082a66637128e105b7df05d635d27a2f3426c4f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3\"" Jan 17 00:44:03.252571 containerd[1476]: time="2026-01-17T00:44:03.252454089Z" level=info msg="StartContainer for \"8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3\"" Jan 17 00:44:03.255560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002970130.mount: Deactivated successfully. Jan 17 00:44:03.348691 systemd[1]: Started cri-containerd-8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3.scope - libcontainer container 8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3. Jan 17 00:44:03.542282 containerd[1476]: time="2026-01-17T00:44:03.541406379Z" level=info msg="StartContainer for \"8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3\" returns successfully" Jan 17 00:44:03.641820 containerd[1476]: time="2026-01-17T00:44:03.640412676Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:44:03.683247 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:44:03.685056 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 17 00:44:03.736794 containerd[1476]: time="2026-01-17T00:44:03.736732963Z" level=error msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" failed" error="failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:44:03.738369 kubelet[2596]: E0117 00:44:03.737764 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:03.738369 kubelet[2596]: E0117 00:44:03.737824 2596 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854"} Jan 17 00:44:03.738369 kubelet[2596]: E0117 00:44:03.737905 2596 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:44:03.738369 kubelet[2596]: E0117 00:44:03.737962 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1abfdd34-176a-4bd5-8495-196edf2ca012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j7s62" podUID="1abfdd34-176a-4bd5-8495-196edf2ca012" Jan 17 00:44:04.231056 kubelet[2596]: E0117 00:44:04.230942 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:04.267343 kubelet[2596]: I0117 00:44:04.267231 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-72g2h" podStartSLOduration=2.541772221 podStartE2EDuration="59.26720826s" podCreationTimestamp="2026-01-17 00:43:05 +0000 UTC" firstStartedPulling="2026-01-17 00:43:06.444157097 +0000 UTC m=+40.159649454" lastFinishedPulling="2026-01-17 00:44:03.169593127 +0000 UTC m=+96.885085493" observedRunningTime="2026-01-17 00:44:04.259388294 +0000 UTC m=+97.974880650" watchObservedRunningTime="2026-01-17 00:44:04.26720826 +0000 UTC m=+97.982700655" Jan 17 00:44:05.245972 kubelet[2596]: E0117 00:44:05.245218 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:06.008177 
kernel: bpftool[4632]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:44:06.520169 systemd-networkd[1373]: vxlan.calico: Link UP Jan 17 00:44:06.520183 systemd-networkd[1373]: vxlan.calico: Gained carrier Jan 17 00:44:08.195787 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Jan 17 00:44:08.241434 kubelet[2596]: E0117 00:44:08.240312 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:11.537644 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:53326.service - OpenSSH per-connection server daemon (10.0.0.1:53326). Jan 17 00:44:11.637549 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 53326 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:11.640535 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:11.656468 systemd-logind[1459]: New session 8 of user core. Jan 17 00:44:11.670507 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:44:11.920375 sshd[4731]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:11.927342 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:53326.service: Deactivated successfully. Jan 17 00:44:11.932238 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:44:11.934738 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:44:11.937154 systemd-logind[1459]: Removed session 8. Jan 17 00:44:12.643168 containerd[1476]: time="2026-01-17T00:44:12.640756279Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" Jan 17 00:44:12.643168 containerd[1476]: time="2026-01-17T00:44:12.641338157Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.818 [INFO][4779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.818 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" iface="eth0" netns="/var/run/netns/cni-9a659270-316e-dbda-e1f7-7f4dd1aba0e9" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.822 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" iface="eth0" netns="/var/run/netns/cni-9a659270-316e-dbda-e1f7-7f4dd1aba0e9" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.827 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" iface="eth0" netns="/var/run/netns/cni-9a659270-316e-dbda-e1f7-7f4dd1aba0e9" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.827 [INFO][4779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.827 [INFO][4779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.979 [INFO][4797] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.985 [INFO][4797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:12.986 [INFO][4797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:13.017 [WARNING][4797] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:13.018 [INFO][4797] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:13.025 [INFO][4797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:13.039871 containerd[1476]: 2026-01-17 00:44:13.036 [INFO][4779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:13.043765 containerd[1476]: time="2026-01-17T00:44:13.043568478Z" level=info msg="TearDown network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" successfully" Jan 17 00:44:13.043765 containerd[1476]: time="2026-01-17T00:44:13.043608384Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" returns successfully" Jan 17 00:44:13.046566 systemd[1]: run-netns-cni\x2d9a659270\x2d316e\x2ddbda\x2de1f7\x2d7f4dd1aba0e9.mount: Deactivated successfully. Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.812 [INFO][4780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.813 [INFO][4780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns="/var/run/netns/cni-13055cfb-4349-9c4f-983d-d5892408eed3" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.817 [INFO][4780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns="/var/run/netns/cni-13055cfb-4349-9c4f-983d-d5892408eed3" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.822 [INFO][4780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns="/var/run/netns/cni-13055cfb-4349-9c4f-983d-d5892408eed3" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.822 [INFO][4780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.822 [INFO][4780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.979 [INFO][4795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:12.987 [INFO][4795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:13.025 [INFO][4795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:13.051 [WARNING][4795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:13.051 [INFO][4795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:13.055 [INFO][4795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:13.063008 containerd[1476]: 2026-01-17 00:44:13.059 [INFO][4780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:13.067757 containerd[1476]: time="2026-01-17T00:44:13.063333329Z" level=info msg="TearDown network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" successfully" Jan 17 00:44:13.067757 containerd[1476]: time="2026-01-17T00:44:13.063431853Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" returns successfully" Jan 17 00:44:13.070154 systemd[1]: run-netns-cni\x2d13055cfb\x2d4349\x2d9c4f\x2d983d\x2dd5892408eed3.mount: Deactivated successfully. 
Jan 17 00:44:13.073459 kubelet[2596]: E0117 00:44:13.072558 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:13.073982 containerd[1476]: time="2026-01-17T00:44:13.073408644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk5gs,Uid:d4177c8b-a26b-419d-9b18-e9e581c975bb,Namespace:kube-system,Attempt:1,}" Jan 17 00:44:13.077233 containerd[1476]: time="2026-01-17T00:44:13.077062719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6588bf47fd-nsnxs,Uid:ae016e77-356a-4fd1-8a79-0362524f48fd,Namespace:calico-system,Attempt:1,}" Jan 17 00:44:13.386981 systemd-networkd[1373]: calibcb3e4e4de2: Link UP Jan 17 00:44:13.389064 systemd-networkd[1373]: calibcb3e4e4de2: Gained carrier Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.188 [INFO][4812] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--fk5gs-eth0 coredns-66bc5c9577- kube-system d4177c8b-a26b-419d-9b18-e9e581c975bb 1095 0 2026-01-17 00:42:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-fk5gs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibcb3e4e4de2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.188 [INFO][4812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.264 [INFO][4841] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" HandleID="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.264 [INFO][4841] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" HandleID="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135f10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-fk5gs", "timestamp":"2026-01-17 00:44:13.264302823 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.264 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.265 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.265 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.286 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.301 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.315 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.319 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.325 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.325 [INFO][4841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.334 [INFO][4841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.361 [INFO][4841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.376 [INFO][4841] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.376 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" host="localhost" Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.376 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
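The IPAM trace above shows Calico's block-affinity scheme in action: the host "localhost" holds an affinity for the block 192.168.88.128/26, so the plugin loads that block, takes the next free address (.129 here, with .130 and .131 following for the next two sandboxes in this log), and records a handle so the allocation can later be released by handle ID, as in the teardown entries earlier. A toy model of that allocation path, assuming a single in-memory block rather than Calico's real datastore and locking:

```python
import ipaddress

# Toy model of Calico-style block IPAM: one /26 block affine to this host.
block = ipaddress.ip_network("192.168.88.128/26")
allocated: dict[str, str] = {}  # handle ID -> assigned IP

def auto_assign(handle_id: str) -> str:
    """Assign the next free address in the block and record the handle."""
    for ip in block.hosts():
        if str(ip) not in allocated.values():
            allocated[handle_id] = str(ip)
            return str(ip)
    raise RuntimeError("block exhausted")

# The three workloads in this log would receive .129, .130, .131 in order.
for pod in ("coredns-66bc5c9577-fk5gs", "whisker-6588bf47fd-nsnxs",
            "calico-apiserver-76d788f98c-msd48"):
    print(pod, auto_assign(f"k8s-pod-network.{pod}"))
```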
Jan 17 00:44:13.437750 containerd[1476]: 2026-01-17 00:44:13.377 [INFO][4841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" HandleID="k8s-pod-network.676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.380 [INFO][4812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk5gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4177c8b-a26b-419d-9b18-e9e581c975bb", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-fk5gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcb3e4e4de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.381 [INFO][4812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.381 [INFO][4812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcb3e4e4de2 ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.390 
[INFO][4812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.392 [INFO][4812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk5gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4177c8b-a26b-419d-9b18-e9e581c975bb", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e", Pod:"coredns-66bc5c9577-fk5gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcb3e4e4de2", MAC:"2a:8f:8c:55:53:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:13.438689 containerd[1476]: 2026-01-17 00:44:13.419 [INFO][4812] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e" Namespace="kube-system" Pod="coredns-66bc5c9577-fk5gs" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:13.523023 systemd-networkd[1373]: cali3d94e4e5a56: Link UP Jan 17 00:44:13.523952 systemd-networkd[1373]: cali3d94e4e5a56: Gained carrier Jan 17 00:44:13.528240 containerd[1476]: time="2026-01-17T00:44:13.526618124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:44:13.528240 containerd[1476]: time="2026-01-17T00:44:13.526724432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:44:13.528240 containerd[1476]: time="2026-01-17T00:44:13.526741174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:13.528240 containerd[1476]: time="2026-01-17T00:44:13.527021930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.214 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6588bf47fd--nsnxs-eth0 whisker-6588bf47fd- calico-system ae016e77-356a-4fd1-8a79-0362524f48fd 1094 0 2026-01-17 00:43:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6588bf47fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6588bf47fd-nsnxs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3d94e4e5a56 [] [] }} ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.214 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.275 [INFO][4848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.276 [INFO][4848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6588bf47fd-nsnxs", "timestamp":"2026-01-17 00:44:13.275964952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.276 [INFO][4848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.376 [INFO][4848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.378 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.400 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.446 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.462 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.466 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.472 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.472 [INFO][4848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.476 [INFO][4848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5 Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.487 [INFO][4848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.514 [INFO][4848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.514 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" host="localhost" Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.514 [INFO][4848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:44:13.565893 containerd[1476]: 2026-01-17 00:44:13.514 [INFO][4848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.520 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6588bf47fd--nsnxs-eth0", GenerateName:"whisker-6588bf47fd-", Namespace:"calico-system", SelfLink:"", UID:"ae016e77-356a-4fd1-8a79-0362524f48fd", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6588bf47fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6588bf47fd-nsnxs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d94e4e5a56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.520 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.520 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d94e4e5a56 ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.524 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.524 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6588bf47fd--nsnxs-eth0", GenerateName:"whisker-6588bf47fd-", Namespace:"calico-system", SelfLink:"", UID:"ae016e77-356a-4fd1-8a79-0362524f48fd", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6588bf47fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5", Pod:"whisker-6588bf47fd-nsnxs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d94e4e5a56", MAC:"7e:9d:9c:3a:86:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:13.569025 containerd[1476]: 2026-01-17 00:44:13.558 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Namespace="calico-system" Pod="whisker-6588bf47fd-nsnxs" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:13.572321 systemd[1]: Started cri-containerd-676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e.scope - libcontainer container 676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e. Jan 17 00:44:13.607954 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:13.610085 containerd[1476]: time="2026-01-17T00:44:13.609531262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:44:13.610085 containerd[1476]: time="2026-01-17T00:44:13.609592657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:44:13.610085 containerd[1476]: time="2026-01-17T00:44:13.609606553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:13.610085 containerd[1476]: time="2026-01-17T00:44:13.609697753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:13.642307 containerd[1476]: time="2026-01-17T00:44:13.641709677Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:44:13.644611 systemd[1]: Started cri-containerd-c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5.scope - libcontainer container c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5. 
Jan 17 00:44:13.664306 containerd[1476]: time="2026-01-17T00:44:13.664204096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk5gs,Uid:d4177c8b-a26b-419d-9b18-e9e581c975bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e\"" Jan 17 00:44:13.665391 kubelet[2596]: E0117 00:44:13.665330 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:13.673018 containerd[1476]: time="2026-01-17T00:44:13.672837538Z" level=info msg="CreateContainer within sandbox \"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:44:13.696825 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:13.714856 containerd[1476]: time="2026-01-17T00:44:13.714448400Z" level=info msg="CreateContainer within sandbox \"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41bf2ebeb18a5f051d414bb6e9213e01891932a70d0019a92d17d7d7fa2e8b94\"" Jan 17 00:44:13.719970 containerd[1476]: time="2026-01-17T00:44:13.718466380Z" level=info msg="StartContainer for \"41bf2ebeb18a5f051d414bb6e9213e01891932a70d0019a92d17d7d7fa2e8b94\"" Jan 17 00:44:13.763554 containerd[1476]: time="2026-01-17T00:44:13.763442864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6588bf47fd-nsnxs,Uid:ae016e77-356a-4fd1-8a79-0362524f48fd,Namespace:calico-system,Attempt:1,} returns sandbox id \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\"" Jan 17 00:44:13.767762 containerd[1476]: time="2026-01-17T00:44:13.767587357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:44:13.820458 systemd[1]: Started cri-containerd-41bf2ebeb18a5f051d414bb6e9213e01891932a70d0019a92d17d7d7fa2e8b94.scope - libcontainer container 41bf2ebeb18a5f051d414bb6e9213e01891932a70d0019a92d17d7d7fa2e8b94. Jan 17 00:44:13.866504 containerd[1476]: time="2026-01-17T00:44:13.866296060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.779 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.781 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" iface="eth0" netns="/var/run/netns/cni-7df363e4-578f-7408-69d6-c9da70ec1949" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.784 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" iface="eth0" netns="/var/run/netns/cni-7df363e4-578f-7408-69d6-c9da70ec1949" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.784 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" iface="eth0" netns="/var/run/netns/cni-7df363e4-578f-7408-69d6-c9da70ec1949" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.784 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.784 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.856 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.858 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.859 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.870 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.870 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.876 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:13.886023 containerd[1476]: 2026-01-17 00:44:13.880 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:13.896992 containerd[1476]: time="2026-01-17T00:44:13.868464679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:44:13.896992 containerd[1476]: time="2026-01-17T00:44:13.869008366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:44:13.896992 containerd[1476]: time="2026-01-17T00:44:13.885831288Z" level=info msg="TearDown network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" successfully" Jan 17 00:44:13.896992 containerd[1476]: time="2026-01-17T00:44:13.895496536Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" returns successfully" Jan 17 00:44:13.898411 kubelet[2596]: E0117 00:44:13.896145 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:13.898411 kubelet[2596]: E0117 00:44:13.896227 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:13.898411 kubelet[2596]: E0117 00:44:13.896373 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6588bf47fd-nsnxs_calico-system(ae016e77-356a-4fd1-8a79-0362524f48fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:13.898703 containerd[1476]: time="2026-01-17T00:44:13.898215574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:44:13.912815 containerd[1476]: time="2026-01-17T00:44:13.912655828Z" level=info msg="StartContainer for \"41bf2ebeb18a5f051d414bb6e9213e01891932a70d0019a92d17d7d7fa2e8b94\" returns successfully" Jan 17 00:44:13.916031 containerd[1476]: time="2026-01-17T00:44:13.915590999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-msd48,Uid:09a01101-a646-4d50-93a3-7a41aecfea23,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:44:13.970128 containerd[1476]: time="2026-01-17T00:44:13.969965917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:13.972564 containerd[1476]: time="2026-01-17T00:44:13.972480875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:44:13.972681 containerd[1476]: time="2026-01-17T00:44:13.972614885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:13.975958 kubelet[2596]: E0117 00:44:13.973077 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:13.975958 kubelet[2596]: E0117 00:44:13.973197 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:13.975958 kubelet[2596]: E0117 00:44:13.973341 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6588bf47fd-nsnxs_calico-system(ae016e77-356a-4fd1-8a79-0362524f48fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:13.976254 kubelet[2596]: E0117 00:44:13.973410 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd" Jan 17 00:44:14.050883 systemd[1]: run-netns-cni\x2d7df363e4\x2d578f\x2d7408\x2d69d6\x2dc9da70ec1949.mount: Deactivated successfully. 
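Once a pull fails with ErrImagePull, kubelet retries under a capped exponential backoff and reports ImagePullBackOff between attempts, which is exactly the transition visible in the pod_workers entries around here. A sketch of that retry shape, assuming the commonly cited kubelet defaults of a 10-second initial delay doubling up to a 5-minute cap; the exact constants are kubelet implementation details, not something this log states:

```python
# Capped exponential backoff, the retry shape kubelet applies to failed
# image pulls (ErrImagePull -> ImagePullBackOff). The 10s/300s constants
# are assumptions about kubelet defaults, not values taken from this log.
def backoff_delays(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_delays()))  # [10, 20, 40, 80, 160, 300, 300, 300]
```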
Jan 17 00:44:14.178341 systemd-networkd[1373]: calie5a7c86d121: Link UP Jan 17 00:44:14.184173 systemd-networkd[1373]: calie5a7c86d121: Gained carrier Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.018 [INFO][5027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0 calico-apiserver-76d788f98c- calico-apiserver 09a01101-a646-4d50-93a3-7a41aecfea23 1107 0 2026-01-17 00:42:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d788f98c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d788f98c-msd48 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5a7c86d121 [] [] }} ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.018 [INFO][5027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.074 [INFO][5046] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" HandleID="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.074 [INFO][5046] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" HandleID="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d788f98c-msd48", "timestamp":"2026-01-17 00:44:14.073813763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.074 [INFO][5046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.074 [INFO][5046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.074 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.092 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.109 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.126 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.130 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.134 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.134 [INFO][5046] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.138 [INFO][5046] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.152 [INFO][5046] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.170 [INFO][5046] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.170 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" host="localhost" Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.170 [INFO][5046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
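All three workloads draw from the same affine block, so its remaining capacity is easy to bound: a /26 holds 2^(32-26) = 64 addresses, of which three are claimed at this point in the log. Calico routes /32s out of the block, so whether the .128 and .191 edge addresses are usable is a detail of its allocator; the arithmetic below just counts raw addresses:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = 3                            # .129, .130, .131 assigned in this log
print(block.num_addresses - claimed)   # 61 raw addresses left in the block
```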
Jan 17 00:44:14.216181 containerd[1476]: 2026-01-17 00:44:14.170 [INFO][5046] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" HandleID="k8s-pod-network.c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0"
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.175 [INFO][5027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a01101-a646-4d50-93a3-7a41aecfea23", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d788f98c-msd48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5a7c86d121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.175 [INFO][5027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0"
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.175 [INFO][5027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5a7c86d121 ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0"
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.183 [INFO][5027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0"
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.184 [INFO][5027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a01101-a646-4d50-93a3-7a41aecfea23", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b", Pod:"calico-apiserver-76d788f98c-msd48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5a7c86d121", MAC:"b6:75:d4:a9:0d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:14.216963 containerd[1476]: 2026-01-17 00:44:14.209 [INFO][5027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-msd48" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0"
Jan 17 00:44:14.298238 containerd[1476]: time="2026-01-17T00:44:14.297864253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:14.299040 containerd[1476]: time="2026-01-17T00:44:14.298204470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:14.299040 containerd[1476]: time="2026-01-17T00:44:14.298255996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:14.299040 containerd[1476]: time="2026-01-17T00:44:14.298413992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:14.301142 kubelet[2596]: E0117 00:44:14.300809 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd"
Jan 17 00:44:14.308385 kubelet[2596]: E0117 00:44:14.304386 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:44:14.344436 systemd[1]: Started cri-containerd-c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b.scope - libcontainer container c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b.
Jan 17 00:44:14.396824 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:44:14.459600 containerd[1476]: time="2026-01-17T00:44:14.459554556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-msd48,Uid:09a01101-a646-4d50-93a3-7a41aecfea23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b\""
Jan 17 00:44:14.463708 containerd[1476]: time="2026-01-17T00:44:14.463656847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:44:14.531578 containerd[1476]: time="2026-01-17T00:44:14.531457954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:44:14.535822 containerd[1476]: time="2026-01-17T00:44:14.535650152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:44:14.535822 containerd[1476]: time="2026-01-17T00:44:14.535747585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:44:14.536154 kubelet[2596]: E0117 00:44:14.536070 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:44:14.536280 kubelet[2596]: E0117 00:44:14.536170 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:44:14.536280 kubelet[2596]: E0117 00:44:14.536257 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:44:14.536376 kubelet[2596]: E0117 00:44:14.536338 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23"
Jan 17 00:44:14.645163 containerd[1476]: time="2026-01-17T00:44:14.642258289Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\""
Jan 17 00:44:14.645163 containerd[1476]: time="2026-01-17T00:44:14.642258542Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\""
Jan 17 00:44:14.809319 kubelet[2596]: I0117 00:44:14.808824 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fk5gs" podStartSLOduration=103.808801007 podStartE2EDuration="1m43.808801007s" podCreationTimestamp="2026-01-17 00:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:44:14.378216442 +0000 UTC m=+108.093708799" watchObservedRunningTime="2026-01-17 00:44:14.808801007 +0000 UTC m=+108.524293362"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.847 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.848 [INFO][5129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" iface="eth0" netns="/var/run/netns/cni-43285c47-2a36-6fac-868d-4ab6fe937b68"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.848 [INFO][5129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" iface="eth0" netns="/var/run/netns/cni-43285c47-2a36-6fac-868d-4ab6fe937b68"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.850 [INFO][5129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" iface="eth0" netns="/var/run/netns/cni-43285c47-2a36-6fac-868d-4ab6fe937b68"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.850 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.850 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.906 [INFO][5152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.906 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.906 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.926 [WARNING][5152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.926 [INFO][5152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.931 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:14.942169 containerd[1476]: 2026-01-17 00:44:14.936 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f"
Jan 17 00:44:14.942169 containerd[1476]: time="2026-01-17T00:44:14.942079927Z" level=info msg="TearDown network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" successfully"
Jan 17 00:44:14.942169 containerd[1476]: time="2026-01-17T00:44:14.942150770Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" returns successfully"
Jan 17 00:44:14.949686 systemd[1]: run-netns-cni\x2d43285c47\x2d2a36\x2d6fac\x2d868d\x2d4ab6fe937b68.mount: Deactivated successfully.
Jan 17 00:44:14.960529 containerd[1476]: time="2026-01-17T00:44:14.958772117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n22c9,Uid:ec63a8db-6e49-4fec-8b7a-9f9042c1bf91,Namespace:calico-system,Attempt:1,}"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.810 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.812 [INFO][5130] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" iface="eth0" netns="/var/run/netns/cni-83a1c727-e100-2111-4b97-30fea95f9084"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.813 [INFO][5130] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" iface="eth0" netns="/var/run/netns/cni-83a1c727-e100-2111-4b97-30fea95f9084"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.814 [INFO][5130] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" iface="eth0" netns="/var/run/netns/cni-83a1c727-e100-2111-4b97-30fea95f9084"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.814 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.814 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.914 [INFO][5146] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.915 [INFO][5146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.931 [INFO][5146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.941 [WARNING][5146] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.941 [INFO][5146] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.950 [INFO][5146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:14.971626 containerd[1476]: 2026-01-17 00:44:14.965 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed"
Jan 17 00:44:14.974565 containerd[1476]: time="2026-01-17T00:44:14.971846786Z" level=info msg="TearDown network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" successfully"
Jan 17 00:44:14.974565 containerd[1476]: time="2026-01-17T00:44:14.972014709Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" returns successfully"
Jan 17 00:44:14.977054 systemd[1]: run-netns-cni\x2d83a1c727\x2de100\x2d2111\x2d4b97\x2d30fea95f9084.mount: Deactivated successfully.
Jan 17 00:44:14.978397 containerd[1476]: time="2026-01-17T00:44:14.978339463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-gwfnp,Uid:61ae5c95-165c-41b7-b9c1-05cec94160e8,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 00:44:15.109148 systemd-networkd[1373]: calibcb3e4e4de2: Gained IPv6LL
Jan 17 00:44:15.171381 systemd-networkd[1373]: cali3d94e4e5a56: Gained IPv6LL
Jan 17 00:44:15.251018 systemd-networkd[1373]: cali7fb6b0067fd: Link UP
Jan 17 00:44:15.264013 systemd-networkd[1373]: cali7fb6b0067fd: Gained carrier
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.082 [INFO][5174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0 calico-apiserver-76d788f98c- calico-apiserver 61ae5c95-165c-41b7-b9c1-05cec94160e8 1137 0 2026-01-17 00:42:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d788f98c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d788f98c-gwfnp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7fb6b0067fd [] [] }} ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.082 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.134 [INFO][5200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" HandleID="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.135 [INFO][5200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" HandleID="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d788f98c-gwfnp", "timestamp":"2026-01-17 00:44:15.134399876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.135 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.135 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.135 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.151 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.174 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.186 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.190 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.195 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.195 [INFO][5200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.198 [INFO][5200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.207 [INFO][5200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.226 [INFO][5200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.226 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" host="localhost"
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.227 [INFO][5200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:15.319844 containerd[1476]: 2026-01-17 00:44:15.227 [INFO][5200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" HandleID="k8s-pod-network.7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.235 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ae5c95-165c-41b7-b9c1-05cec94160e8", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d788f98c-gwfnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fb6b0067fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.235 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.236 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fb6b0067fd ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.255 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.272 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ae5c95-165c-41b7-b9c1-05cec94160e8", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5", Pod:"calico-apiserver-76d788f98c-gwfnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fb6b0067fd", MAC:"1a:07:fe:99:f2:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:15.322950 containerd[1476]: 2026-01-17 00:44:15.307 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5" Namespace="calico-apiserver" Pod="calico-apiserver-76d788f98c-gwfnp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0"
Jan 17 00:44:15.336530 kubelet[2596]: E0117 00:44:15.336444 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:44:15.341903 kubelet[2596]: E0117 00:44:15.339824 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23"
Jan 17 00:44:15.345380 kubelet[2596]: E0117 00:44:15.344405 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6588bf47fd-nsnxs" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd"
Jan 17 00:44:15.367611 systemd-networkd[1373]: calie5a7c86d121: Gained IPv6LL
Jan 17 00:44:15.425842 containerd[1476]: time="2026-01-17T00:44:15.424085710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:15.425842 containerd[1476]: time="2026-01-17T00:44:15.424244767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:15.425842 containerd[1476]: time="2026-01-17T00:44:15.424281877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:15.428719 containerd[1476]: time="2026-01-17T00:44:15.425046618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:15.515346 systemd[1]: Started cri-containerd-7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5.scope - libcontainer container 7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5.
Jan 17 00:44:15.573466 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:44:15.575510 systemd-networkd[1373]: cali1ae8041c022: Link UP
Jan 17 00:44:15.576551 systemd-networkd[1373]: cali1ae8041c022: Gained carrier
Jan 17 00:44:15.644575 containerd[1476]: time="2026-01-17T00:44:15.644218829Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\""
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.074 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--n22c9-eth0 goldmane-7c778bb748- calico-system ec63a8db-6e49-4fec-8b7a-9f9042c1bf91 1138 0 2026-01-17 00:43:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-n22c9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1ae8041c022 [] [] }} ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.075 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.139 [INFO][5193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" HandleID="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.140 [INFO][5193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" HandleID="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000538710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-n22c9", "timestamp":"2026-01-17 00:44:15.139551102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.141 [INFO][5193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.228 [INFO][5193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.228 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.279 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.347 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.408 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.421 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.431 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.433 [INFO][5193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.463 [INFO][5193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.516 [INFO][5193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.552 [INFO][5193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.553 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" host="localhost"
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.553 [INFO][5193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:15.646320 containerd[1476]: 2026-01-17 00:44:15.553 [INFO][5193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" HandleID="k8s-pod-network.4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.560 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--n22c9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-n22c9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ae8041c022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.561 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.561 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ae8041c022 ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.577 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.577 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--n22c9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc", Pod:"goldmane-7c778bb748-n22c9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ae8041c022", MAC:"3e:a8:72:04:20:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:15.652621 containerd[1476]: 2026-01-17 00:44:15.630 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc" Namespace="calico-system" Pod="goldmane-7c778bb748-n22c9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--n22c9-eth0"
Jan 17 00:44:15.691956 containerd[1476]: time="2026-01-17T00:44:15.691784844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d788f98c-gwfnp,Uid:61ae5c95-165c-41b7-b9c1-05cec94160e8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5\""
Jan 17 00:44:15.696635 containerd[1476]: time="2026-01-17T00:44:15.696421562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:44:15.723959 containerd[1476]: time="2026-01-17T00:44:15.718463484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:15.723959 containerd[1476]: time="2026-01-17T00:44:15.718844468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:15.723959 containerd[1476]: time="2026-01-17T00:44:15.718860588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:15.723959 containerd[1476]: time="2026-01-17T00:44:15.719021960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:15.769446 systemd[1]: Started cri-containerd-4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc.scope - libcontainer container 4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc.
Jan 17 00:44:15.777645 containerd[1476]: time="2026-01-17T00:44:15.777267493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:44:15.779364 containerd[1476]: time="2026-01-17T00:44:15.779248190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:44:15.780236 kubelet[2596]: E0117 00:44:15.780158 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:44:15.780952 kubelet[2596]: E0117 00:44:15.780630 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:44:15.781404 containerd[1476]: time="2026-01-17T00:44:15.780367965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:44:15.781657 kubelet[2596]: E0117 00:44:15.781519 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:44:15.781991 kubelet[2596]: E0117 00:44:15.781857 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8"
Jan 17 00:44:15.820149 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:44:15.879450 containerd[1476]: time="2026-01-17T00:44:15.879369653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n22c9,Uid:ec63a8db-6e49-4fec-8b7a-9f9042c1bf91,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc\""
Jan 17 00:44:15.887030 containerd[1476]: time="2026-01-17T00:44:15.886417180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.794 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.794 [INFO][5279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" iface="eth0" netns="/var/run/netns/cni-45c75ed5-5db4-b866-06bd-b28420535a4b"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.795 [INFO][5279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" iface="eth0" netns="/var/run/netns/cni-45c75ed5-5db4-b866-06bd-b28420535a4b"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.798 [INFO][5279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" iface="eth0" netns="/var/run/netns/cni-45c75ed5-5db4-b866-06bd-b28420535a4b"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.802 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.802 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.849 [INFO][5326] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.852 [INFO][5326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.852 [INFO][5326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.885 [WARNING][5326] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.885 [INFO][5326] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.895 [INFO][5326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:15.913261 containerd[1476]: 2026-01-17 00:44:15.899 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8"
Jan 17 00:44:15.913261 containerd[1476]: time="2026-01-17T00:44:15.904766503Z" level=info msg="TearDown network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" successfully"
Jan 17 00:44:15.913261 containerd[1476]: time="2026-01-17T00:44:15.904806057Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" returns successfully"
Jan 17 00:44:15.913261 containerd[1476]: time="2026-01-17T00:44:15.912320508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674c9b8465-rpks6,Uid:084547cb-aa8f-42ba-b949-f26ba954f5f8,Namespace:calico-system,Attempt:1,}"
Jan 17 00:44:15.959476 containerd[1476]: time="2026-01-17T00:44:15.959312143Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:44:15.966216 containerd[1476]: time="2026-01-17T00:44:15.965554120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 00:44:15.966216 containerd[1476]: time="2026-01-17T00:44:15.965664065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:44:15.966366 kubelet[2596]: E0117 00:44:15.966033 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:44:15.966366 kubelet[2596]: E0117 00:44:15.966158 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:44:15.966366 kubelet[2596]: E0117 00:44:15.966294 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:44:15.966366 kubelet[2596]: E0117 00:44:15.966345 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91"
Jan 17 00:44:16.058720 systemd[1]: run-netns-cni\x2d45c75ed5\x2d5db4\x2db866\x2d06bd\x2db28420535a4b.mount: Deactivated successfully.
Jan 17 00:44:16.271480 systemd-networkd[1373]: cali2ccbe8b2286: Link UP
Jan 17 00:44:16.271735 systemd-networkd[1373]: cali2ccbe8b2286: Gained carrier
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.070 [INFO][5339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0 calico-kube-controllers-674c9b8465- calico-system 084547cb-aa8f-42ba-b949-f26ba954f5f8 1169 0 2026-01-17 00:43:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:674c9b8465 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-674c9b8465-rpks6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2ccbe8b2286 [] [] }} ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.070 [INFO][5339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.139 [INFO][5351] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" HandleID="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.140 [INFO][5351] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" HandleID="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-674c9b8465-rpks6", "timestamp":"2026-01-17 00:44:16.139764324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.141 [INFO][5351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.141 [INFO][5351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.142 [INFO][5351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.164 [INFO][5351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.183 [INFO][5351] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.196 [INFO][5351] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.209 [INFO][5351] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.215 [INFO][5351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.215 [INFO][5351] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.221 [INFO][5351] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.234 [INFO][5351] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.250 [INFO][5351] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.250 [INFO][5351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" host="localhost"
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.250 [INFO][5351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:16.307240 containerd[1476]: 2026-01-17 00:44:16.250 [INFO][5351] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" HandleID="k8s-pod-network.1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.254 [INFO][5339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0", GenerateName:"calico-kube-controllers-674c9b8465-", Namespace:"calico-system", SelfLink:"", UID:"084547cb-aa8f-42ba-b949-f26ba954f5f8", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674c9b8465", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-674c9b8465-rpks6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ccbe8b2286", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.255 [INFO][5339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.255 [INFO][5339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ccbe8b2286 ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.269 [INFO][5339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.270 [INFO][5339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0", GenerateName:"calico-kube-controllers-674c9b8465-", Namespace:"calico-system", SelfLink:"", UID:"084547cb-aa8f-42ba-b949-f26ba954f5f8", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674c9b8465", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031", Pod:"calico-kube-controllers-674c9b8465-rpks6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ccbe8b2286", MAC:"b6:ec:74:90:33:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:16.308873 containerd[1476]: 2026-01-17 00:44:16.292 [INFO][5339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031" Namespace="calico-system" Pod="calico-kube-controllers-674c9b8465-rpks6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0"
Jan 17 00:44:16.347833 kubelet[2596]: E0117 00:44:16.347283 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91"
Jan 17 00:44:16.355063 containerd[1476]: time="2026-01-17T00:44:16.353250265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:16.355063 containerd[1476]: time="2026-01-17T00:44:16.354349503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:16.355063 containerd[1476]: time="2026-01-17T00:44:16.354378717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:16.355063 containerd[1476]: time="2026-01-17T00:44:16.354561679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:16.359033 kubelet[2596]: E0117 00:44:16.358840 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:44:16.362205 kubelet[2596]: E0117 00:44:16.361878 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23"
Jan 17 00:44:16.370011 kubelet[2596]: E0117 00:44:16.369856 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8"
Jan 17 00:44:16.438299 systemd[1]: Started cri-containerd-1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031.scope - libcontainer container 1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031.
Jan 17 00:44:16.477176 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:16.625339 containerd[1476]: time="2026-01-17T00:44:16.623544394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674c9b8465-rpks6,Uid:084547cb-aa8f-42ba-b949-f26ba954f5f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031\"" Jan 17 00:44:16.634043 containerd[1476]: time="2026-01-17T00:44:16.633238392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:44:16.647640 containerd[1476]: time="2026-01-17T00:44:16.647253675Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:44:16.648526 containerd[1476]: time="2026-01-17T00:44:16.648167295Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:44:16.718729 containerd[1476]: time="2026-01-17T00:44:16.718670320Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:16.723417 containerd[1476]: time="2026-01-17T00:44:16.723292241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:16.723570 containerd[1476]: time="2026-01-17T00:44:16.723418387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:44:16.724784 kubelet[2596]: E0117 00:44:16.723872 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:16.724867 kubelet[2596]: E0117 00:44:16.724817 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:16.725055 kubelet[2596]: E0117 00:44:16.724992 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:16.725197 kubelet[2596]: E0117 00:44:16.725073 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:16.839347 systemd-networkd[1373]: cali1ae8041c022: Gained IPv6LL Jan 17 00:44:16.899712 systemd-networkd[1373]: cali7fb6b0067fd: Gained IPv6LL Jan 17 00:44:16.965179 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:42876.service - OpenSSH per-connection server daemon (10.0.0.1:42876). Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.790 [INFO][5437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.792 [INFO][5437] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" iface="eth0" netns="/var/run/netns/cni-2b8134fd-18ab-4b15-7ceb-ef5c8dea0b8e" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.792 [INFO][5437] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" iface="eth0" netns="/var/run/netns/cni-2b8134fd-18ab-4b15-7ceb-ef5c8dea0b8e" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.793 [INFO][5437] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" iface="eth0" netns="/var/run/netns/cni-2b8134fd-18ab-4b15-7ceb-ef5c8dea0b8e" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.793 [INFO][5437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.793 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.904 [INFO][5451] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.905 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.905 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.940 [WARNING][5451] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.940 [INFO][5451] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.947 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:16.976868 containerd[1476]: 2026-01-17 00:44:16.970 [INFO][5437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:16.976868 containerd[1476]: time="2026-01-17T00:44:16.979649881Z" level=info msg="TearDown network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" successfully" Jan 17 00:44:16.976868 containerd[1476]: time="2026-01-17T00:44:16.979690747Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" returns successfully" Jan 17 00:44:16.988453 systemd[1]: run-netns-cni\x2d2b8134fd\x2d18ab\x2d4b15\x2d7ceb\x2def5c8dea0b8e.mount: Deactivated successfully. Jan 17 00:44:17.000413 containerd[1476]: time="2026-01-17T00:44:17.000302110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jdngt,Uid:fa61c0c6-a39e-4c93-94a9-44f82847e39a,Namespace:calico-system,Attempt:1,}" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.800 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.801 [INFO][5435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" iface="eth0" netns="/var/run/netns/cni-632866fe-2074-c7f2-bd27-0304052116f2" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.802 [INFO][5435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" iface="eth0" netns="/var/run/netns/cni-632866fe-2074-c7f2-bd27-0304052116f2" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.807 [INFO][5435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" iface="eth0" netns="/var/run/netns/cni-632866fe-2074-c7f2-bd27-0304052116f2" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.807 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.807 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.913 [INFO][5457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.913 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.948 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.987 [WARNING][5457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.987 [INFO][5457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:16.998 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:17.016876 containerd[1476]: 2026-01-17 00:44:17.011 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:17.017745 containerd[1476]: time="2026-01-17T00:44:17.017319013Z" level=info msg="TearDown network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" successfully" Jan 17 00:44:17.017745 containerd[1476]: time="2026-01-17T00:44:17.017350071Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" returns successfully" Jan 17 00:44:17.024391 kubelet[2596]: E0117 00:44:17.024353 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:17.025616 systemd[1]: run-netns-cni\x2d632866fe\x2d2074\x2dc7f2\x2dbd27\x2d0304052116f2.mount: Deactivated successfully. 
Jan 17 00:44:17.027743 containerd[1476]: time="2026-01-17T00:44:17.027334599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j7s62,Uid:1abfdd34-176a-4bd5-8495-196edf2ca012,Namespace:kube-system,Attempt:1,}" Jan 17 00:44:17.232248 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 42876 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:17.261757 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:17.287518 systemd-logind[1459]: New session 9 of user core. Jan 17 00:44:17.298442 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:44:17.357357 systemd-networkd[1373]: cali2ccbe8b2286: Gained IPv6LL Jan 17 00:44:17.385425 kubelet[2596]: E0117 00:44:17.384384 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:17.390458 kubelet[2596]: E0117 00:44:17.387838 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:44:17.391442 kubelet[2596]: E0117 00:44:17.390410 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:17.397081 kubelet[2596]: E0117 00:44:17.396729 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:44:17.771422 systemd-networkd[1373]: cali4d9070d2051: Link UP Jan 17 00:44:17.774435 systemd-networkd[1373]: cali4d9070d2051: Gained carrier Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.302 [INFO][5472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jdngt-eth0 csi-node-driver- calico-system fa61c0c6-a39e-4c93-94a9-44f82847e39a 1201 0 2026-01-17 00:43:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jdngt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4d9070d2051 [] [] }} ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.303 [INFO][5472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.425 [INFO][5506] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" HandleID="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.426 [INFO][5506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" HandleID="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jdngt", "timestamp":"2026-01-17 00:44:17.425499209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.426 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.426 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.426 [INFO][5506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.474 [INFO][5506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.531 [INFO][5506] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.568 [INFO][5506] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.641 [INFO][5506] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.653 [INFO][5506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.653 [INFO][5506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.670 [INFO][5506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58 Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.697 [INFO][5506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.737 [INFO][5506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.738 [INFO][5506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" host="localhost" Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.738 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:44:17.828296 containerd[1476]: 2026-01-17 00:44:17.740 [INFO][5506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" HandleID="k8s-pod-network.eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.752 [INFO][5472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jdngt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa61c0c6-a39e-4c93-94a9-44f82847e39a", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jdngt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9070d2051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.752 [INFO][5472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.753 [INFO][5472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d9070d2051 ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.774 [INFO][5472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.775 [INFO][5472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jdngt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa61c0c6-a39e-4c93-94a9-44f82847e39a", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58", Pod:"csi-node-driver-jdngt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9070d2051", MAC:"5e:46:b2:47:ae:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:17.829701 containerd[1476]: 2026-01-17 00:44:17.818 [INFO][5472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58" Namespace="calico-system" Pod="csi-node-driver-jdngt" WorkloadEndpoint="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:17.848761 sshd[5469]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:17.869497 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:42876.service: Deactivated successfully. Jan 17 00:44:17.889819 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:44:17.895711 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:44:17.910915 systemd-logind[1459]: Removed session 9. Jan 17 00:44:17.973344 containerd[1476]: time="2026-01-17T00:44:17.970899807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:44:17.973344 containerd[1476]: time="2026-01-17T00:44:17.971031505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:44:17.973344 containerd[1476]: time="2026-01-17T00:44:17.971055779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:17.990805 containerd[1476]: time="2026-01-17T00:44:17.988742074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:18.022623 systemd-networkd[1373]: calie54942141b0: Link UP Jan 17 00:44:18.025140 systemd-networkd[1373]: calie54942141b0: Gained carrier Jan 17 00:44:18.112891 systemd[1]: Started cri-containerd-eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58.scope - libcontainer container eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58. Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.335 [INFO][5484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--j7s62-eth0 coredns-66bc5c9577- kube-system 1abfdd34-176a-4bd5-8495-196edf2ca012 1202 0 2026-01-17 00:42:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-j7s62 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie54942141b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.335 [INFO][5484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.505 [INFO][5516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" HandleID="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.506 [INFO][5516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" HandleID="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00020ee00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-j7s62", "timestamp":"2026-01-17 00:44:17.505817353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.506 [INFO][5516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.738 [INFO][5516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.738 [INFO][5516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.794 [INFO][5516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.839 [INFO][5516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.903 [INFO][5516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.932 [INFO][5516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.939 [INFO][5516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.939 [INFO][5516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.951 [INFO][5516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37 Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:17.979 [INFO][5516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:18.007 [INFO][5516] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:18.009 [INFO][5516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" host="localhost" Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:18.009 [INFO][5516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:44:18.128322 containerd[1476]: 2026-01-17 00:44:18.009 [INFO][5516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" HandleID="k8s-pod-network.3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.015 [INFO][5484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j7s62-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1abfdd34-176a-4bd5-8495-196edf2ca012", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-j7s62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie54942141b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.016 [INFO][5484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.016 [INFO][5484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie54942141b0 ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.026 
[INFO][5484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.026 [INFO][5484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j7s62-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1abfdd34-176a-4bd5-8495-196edf2ca012", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37", Pod:"coredns-66bc5c9577-j7s62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie54942141b0", MAC:"72:e6:3f:65:54:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:18.132350 containerd[1476]: 2026-01-17 00:44:18.106 [INFO][5484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37" Namespace="kube-system" Pod="coredns-66bc5c9577-j7s62" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:18.152384 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:18.242885 containerd[1476]: time="2026-01-17T00:44:18.234159542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:44:18.242885 containerd[1476]: time="2026-01-17T00:44:18.234485201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:44:18.242885 containerd[1476]: time="2026-01-17T00:44:18.234505971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:18.251586 containerd[1476]: time="2026-01-17T00:44:18.245973997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:44:18.273620 containerd[1476]: time="2026-01-17T00:44:18.273469180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jdngt,Uid:fa61c0c6-a39e-4c93-94a9-44f82847e39a,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58\"" Jan 17 00:44:18.293783 containerd[1476]: time="2026-01-17T00:44:18.293374672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:44:18.374669 systemd[1]: Started cri-containerd-3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37.scope - libcontainer container 3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37. Jan 17 00:44:18.413258 kubelet[2596]: E0117 00:44:18.412610 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:18.418614 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:44:18.432351 containerd[1476]: time="2026-01-17T00:44:18.432291122Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:18.434622 containerd[1476]: time="2026-01-17T00:44:18.434578243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:44:18.434912 containerd[1476]: time="2026-01-17T00:44:18.434868446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:44:18.435259 kubelet[2596]: E0117 00:44:18.435216 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:18.435412 kubelet[2596]: E0117 00:44:18.435389 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:18.436016 kubelet[2596]: E0117 00:44:18.435832 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:18.437760 containerd[1476]: time="2026-01-17T00:44:18.437637700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:44:18.540268 containerd[1476]: time="2026-01-17T00:44:18.540180999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j7s62,Uid:1abfdd34-176a-4bd5-8495-196edf2ca012,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37\"" Jan 17 00:44:18.545062 containerd[1476]: time="2026-01-17T00:44:18.544724024Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:18.545728 kubelet[2596]: E0117 00:44:18.545062 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:18.551574 containerd[1476]: time="2026-01-17T00:44:18.551266982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:44:18.551574 containerd[1476]: time="2026-01-17T00:44:18.551486653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:44:18.551779 kubelet[2596]: E0117 00:44:18.551712 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:18.551779 kubelet[2596]: E0117 00:44:18.551768 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:18.551879 kubelet[2596]: E0117 00:44:18.551849 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:18.552030 kubelet[2596]: E0117 00:44:18.551900 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:18.587177 containerd[1476]: time="2026-01-17T00:44:18.583793199Z" level=info msg="CreateContainer within sandbox \"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:44:18.667137 containerd[1476]: time="2026-01-17T00:44:18.666517221Z" level=info msg="CreateContainer within sandbox \"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11196ade5c97240a79c4702111dcff534b697cd8189409571a4a6fa5f62dbc01\"" Jan 17 00:44:18.669167 containerd[1476]: time="2026-01-17T00:44:18.667969189Z" level=info msg="StartContainer for \"11196ade5c97240a79c4702111dcff534b697cd8189409571a4a6fa5f62dbc01\"" Jan 17 00:44:18.814310 systemd[1]: Started cri-containerd-11196ade5c97240a79c4702111dcff534b697cd8189409571a4a6fa5f62dbc01.scope - libcontainer container 11196ade5c97240a79c4702111dcff534b697cd8189409571a4a6fa5f62dbc01. Jan 17 00:44:18.931028 containerd[1476]: time="2026-01-17T00:44:18.930902178Z" level=info msg="StartContainer for \"11196ade5c97240a79c4702111dcff534b697cd8189409571a4a6fa5f62dbc01\" returns successfully" Jan 17 00:44:19.093355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181646000.mount: Deactivated successfully. 
Jan 17 00:44:19.421266 kubelet[2596]: E0117 00:44:19.419909 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:19.426831 kubelet[2596]: E0117 00:44:19.425572 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:19.595895 systemd-networkd[1373]: calie54942141b0: Gained IPv6LL Jan 17 00:44:19.650031 kubelet[2596]: I0117 00:44:19.648928 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j7s62" podStartSLOduration=108.648903776 podStartE2EDuration="1m48.648903776s" podCreationTimestamp="2026-01-17 00:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:44:19.592525279 +0000 UTC m=+113.308017656" watchObservedRunningTime="2026-01-17 00:44:19.648903776 +0000 UTC m=+113.364396132" Jan 17 00:44:19.717836 systemd-networkd[1373]: cali4d9070d2051: Gained IPv6LL Jan 17 00:44:20.442055 kubelet[2596]: E0117 00:44:20.442004 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:21.139412 containerd[1476]: time="2026-01-17T00:44:21.139324062Z" level=info msg="StopPodSandbox for \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\"" Jan 17 00:44:21.161745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5-shm.mount: Deactivated successfully. Jan 17 00:44:21.270423 systemd[1]: cri-containerd-c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5.scope: Deactivated successfully. Jan 17 00:44:21.397578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5-rootfs.mount: Deactivated successfully. 
Jan 17 00:44:21.441063 kubelet[2596]: E0117 00:44:21.439908 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:44:21.490547 containerd[1476]: time="2026-01-17T00:44:21.487529258Z" level=info msg="shim disconnected" id=c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5 namespace=k8s.io
Jan 17 00:44:21.498416 containerd[1476]: time="2026-01-17T00:44:21.497902906Z" level=warning msg="cleaning up after shim disconnected" id=c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5 namespace=k8s.io
Jan 17 00:44:21.498416 containerd[1476]: time="2026-01-17T00:44:21.498053228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:44:21.910309 systemd-networkd[1373]: cali3d94e4e5a56: Link DOWN
Jan 17 00:44:21.910328 systemd-networkd[1373]: cali3d94e4e5a56: Lost carrier
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.880 [INFO][5725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.880 [INFO][5725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" iface="eth0" netns="/var/run/netns/cni-536d13db-e48a-d41a-8279-05ef56453082"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.881 [INFO][5725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" iface="eth0" netns="/var/run/netns/cni-536d13db-e48a-d41a-8279-05ef56453082"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.971 [INFO][5725] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" after=90.467458ms iface="eth0" netns="/var/run/netns/cni-536d13db-e48a-d41a-8279-05ef56453082"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.971 [INFO][5725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:21.971 [INFO][5725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.077 [INFO][5734] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.082 [INFO][5734] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.083 [INFO][5734] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
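The recurring dns.go "Nameserver limits exceeded" warnings are kubelet clamping the resolv.conf it hands to pods: it supports a limited number of nameservers (three, in the releases this log resembles), drops the rest, and logs the line it actually applied, which is why only "1.1.1.1 1.0.0.1 8.8.8.8" survives. A sketch of that clamping, with the threshold and the parsing both stated assumptions:

```python
MAX_NAMESERVERS = 3  # kubelet's per-pod nameserver limit (assumed here)

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameserver line kubelet would actually apply."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]  # anything past the limit is omitted

conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```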
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.266 [INFO][5734] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.266 [INFO][5734] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.283 [INFO][5734] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:22.323401 containerd[1476]: 2026-01-17 00:44:22.294 [INFO][5725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5"
Jan 17 00:44:22.331296 containerd[1476]: time="2026-01-17T00:44:22.331218786Z" level=info msg="TearDown network for sandbox \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" successfully"
Jan 17 00:44:22.331296 containerd[1476]: time="2026-01-17T00:44:22.331273638Z" level=info msg="StopPodSandbox for \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" returns successfully"
Jan 17 00:44:22.350476 containerd[1476]: time="2026-01-17T00:44:22.349593647Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\""
Jan 17 00:44:22.357587 systemd[1]: run-netns-cni\x2d536d13db\x2de48a\x2dd41a\x2d8279\x2d05ef56453082.mount: Deactivated successfully.
Jan 17 00:44:22.465908 kubelet[2596]: I0117 00:44:22.465670 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.617 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6588bf47fd--nsnxs-eth0", GenerateName:"whisker-6588bf47fd-", Namespace:"calico-system", SelfLink:"", UID:"ae016e77-356a-4fd1-8a79-0362524f48fd", ResourceVersion:"1282", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6588bf47fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5", Pod:"whisker-6588bf47fd-nsnxs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d94e4e5a56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.621 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.626 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns=""
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.626 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.626 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.735 [INFO][5769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.737 [INFO][5769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.740 [INFO][5769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.768 [WARNING][5769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.768 [INFO][5769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0"
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.774 [INFO][5769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:22.791486 containerd[1476]: 2026-01-17 00:44:22.783 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a"
Jan 17 00:44:22.792924 containerd[1476]: time="2026-01-17T00:44:22.792288134Z" level=info msg="TearDown network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" successfully"
Jan 17 00:44:22.792924 containerd[1476]: time="2026-01-17T00:44:22.792326636Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" returns successfully"
Jan 17 00:44:22.859542 kubelet[2596]: I0117 00:44:22.855360 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-ca-bundle\") pod \"ae016e77-356a-4fd1-8a79-0362524f48fd\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") "
Jan 17 00:44:22.859542 kubelet[2596]: I0117 00:44:22.855419 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-backend-key-pair\") pod \"ae016e77-356a-4fd1-8a79-0362524f48fd\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") "
Jan 17 00:44:22.859542 kubelet[2596]: I0117 00:44:22.855448 2596 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-976xk\" (UniqueName: \"kubernetes.io/projected/ae016e77-356a-4fd1-8a79-0362524f48fd-kube-api-access-976xk\") pod \"ae016e77-356a-4fd1-8a79-0362524f48fd\" (UID: \"ae016e77-356a-4fd1-8a79-0362524f48fd\") "
Jan 17 00:44:22.866374 kubelet[2596]: I0117 00:44:22.866326 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ae016e77-356a-4fd1-8a79-0362524f48fd" (UID: "ae016e77-356a-4fd1-8a79-0362524f48fd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:44:22.875433 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:36518.service - OpenSSH per-connection server daemon (10.0.0.1:36518).
Jan 17 00:44:22.877680 kubelet[2596]: I0117 00:44:22.876506 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ae016e77-356a-4fd1-8a79-0362524f48fd" (UID: "ae016e77-356a-4fd1-8a79-0362524f48fd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:44:22.877680 kubelet[2596]: I0117 00:44:22.876716 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae016e77-356a-4fd1-8a79-0362524f48fd-kube-api-access-976xk" (OuterVolumeSpecName: "kube-api-access-976xk") pod "ae016e77-356a-4fd1-8a79-0362524f48fd" (UID: "ae016e77-356a-4fd1-8a79-0362524f48fd"). InnerVolumeSpecName "kube-api-access-976xk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:44:22.885849 systemd[1]: var-lib-kubelet-pods-ae016e77\x2d356a\x2d4fd1\x2d8a79\x2d0362524f48fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d976xk.mount: Deactivated successfully.
Jan 17 00:44:22.891203 systemd[1]: var-lib-kubelet-pods-ae016e77\x2d356a\x2d4fd1\x2d8a79\x2d0362524f48fd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 17 00:44:22.958144 kubelet[2596]: I0117 00:44:22.957298 2596 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 17 00:44:22.958144 kubelet[2596]: I0117 00:44:22.957341 2596 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae016e77-356a-4fd1-8a79-0362524f48fd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jan 17 00:44:22.958144 kubelet[2596]: I0117 00:44:22.957355 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-976xk\" (UniqueName: \"kubernetes.io/projected/ae016e77-356a-4fd1-8a79-0362524f48fd-kube-api-access-976xk\") on node \"localhost\" DevicePath \"\""
Jan 17 00:44:23.040030 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 36518 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:44:23.046691 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:44:23.082343 systemd-logind[1459]: New session 10 of user core.
Jan 17 00:44:23.091222 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 00:44:23.495480 sshd[5779]: pam_unix(sshd:session): session closed for user core
Jan 17 00:44:23.500795 systemd[1]: Removed slice kubepods-besteffort-podae016e77_356a_4fd1_8a79_0362524f48fd.slice - libcontainer container kubepods-besteffort-podae016e77_356a_4fd1_8a79_0362524f48fd.slice.
Jan 17 00:44:23.505402 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:36518.service: Deactivated successfully.
Jan 17 00:44:23.508889 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 00:44:23.513720 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit.
Jan 17 00:44:23.516657 systemd-logind[1459]: Removed session 10.
Jan 17 00:44:23.722606 systemd[1]: Created slice kubepods-besteffort-podee6e145a_aa21_42ce_80af_75c3ba3e223d.slice - libcontainer container kubepods-besteffort-podee6e145a_aa21_42ce_80af_75c3ba3e223d.slice.
Jan 17 00:44:23.773342 kubelet[2596]: I0117 00:44:23.773198 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee6e145a-aa21-42ce-80af-75c3ba3e223d-whisker-ca-bundle\") pod \"whisker-688bc4c644-qrndd\" (UID: \"ee6e145a-aa21-42ce-80af-75c3ba3e223d\") " pod="calico-system/whisker-688bc4c644-qrndd"
Jan 17 00:44:23.783025 kubelet[2596]: I0117 00:44:23.778546 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52j8r\" (UniqueName: \"kubernetes.io/projected/ee6e145a-aa21-42ce-80af-75c3ba3e223d-kube-api-access-52j8r\") pod \"whisker-688bc4c644-qrndd\" (UID: \"ee6e145a-aa21-42ce-80af-75c3ba3e223d\") " pod="calico-system/whisker-688bc4c644-qrndd"
Jan 17 00:44:23.783025 kubelet[2596]: I0117 00:44:23.778633 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee6e145a-aa21-42ce-80af-75c3ba3e223d-whisker-backend-key-pair\") pod \"whisker-688bc4c644-qrndd\" (UID: \"ee6e145a-aa21-42ce-80af-75c3ba3e223d\") " pod="calico-system/whisker-688bc4c644-qrndd"
Jan 17 00:44:24.066143 containerd[1476]: time="2026-01-17T00:44:24.055818960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-688bc4c644-qrndd,Uid:ee6e145a-aa21-42ce-80af-75c3ba3e223d,Namespace:calico-system,Attempt:0,}"
Jan 17 00:44:24.613175 systemd-networkd[1373]: cali2743234b2b3: Link UP
Jan 17 00:44:24.619392 systemd-networkd[1373]: cali2743234b2b3: Gained carrier
Jan 17 00:44:24.644207 kubelet[2596]: I0117 00:44:24.644016 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae016e77-356a-4fd1-8a79-0362524f48fd" path="/var/lib/kubelet/pods/ae016e77-356a-4fd1-8a79-0362524f48fd/volumes"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.238 [INFO][5798] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--688bc4c644--qrndd-eth0 whisker-688bc4c644- calico-system ee6e145a-aa21-42ce-80af-75c3ba3e223d 1312 0 2026-01-17 00:44:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:688bc4c644 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-688bc4c644-qrndd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2743234b2b3 [] [] }} ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.239 [INFO][5798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.348 [INFO][5811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" HandleID="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Workload="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.348 [INFO][5811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" HandleID="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Workload="localhost-k8s-whisker--688bc4c644--qrndd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-688bc4c644-qrndd", "timestamp":"2026-01-17 00:44:24.34805782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.349 [INFO][5811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.349 [INFO][5811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.349 [INFO][5811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.387 [INFO][5811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.416 [INFO][5811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.464 [INFO][5811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.477 [INFO][5811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.495 [INFO][5811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.495 [INFO][5811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.514 [INFO][5811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.527 [INFO][5811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.586 [INFO][5811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.586 [INFO][5811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" host="localhost"
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.586 [INFO][5811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:44:24.697880 containerd[1476]: 2026-01-17 00:44:24.586 [INFO][5811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" HandleID="k8s-pod-network.a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Workload="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.598 [INFO][5798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--688bc4c644--qrndd-eth0", GenerateName:"whisker-688bc4c644-", Namespace:"calico-system", SelfLink:"", UID:"ee6e145a-aa21-42ce-80af-75c3ba3e223d", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 44, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"688bc4c644", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-688bc4c644-qrndd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2743234b2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.602 [INFO][5798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.602 [INFO][5798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2743234b2b3 ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.616 [INFO][5798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.617 [INFO][5798] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--688bc4c644--qrndd-eth0", GenerateName:"whisker-688bc4c644-", Namespace:"calico-system", SelfLink:"", UID:"ee6e145a-aa21-42ce-80af-75c3ba3e223d", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 44, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"688bc4c644", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8", Pod:"whisker-688bc4c644-qrndd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2743234b2b3", MAC:"1e:d1:c3:07:26:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:44:24.698771 containerd[1476]: 2026-01-17 00:44:24.673 [INFO][5798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8" Namespace="calico-system" Pod="whisker-688bc4c644-qrndd" WorkloadEndpoint="localhost-k8s-whisker--688bc4c644--qrndd-eth0"
Jan 17 00:44:24.823812 containerd[1476]: time="2026-01-17T00:44:24.823270034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:24.823812 containerd[1476]: time="2026-01-17T00:44:24.823371494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:24.823812 containerd[1476]: time="2026-01-17T00:44:24.823390359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:24.823812 containerd[1476]: time="2026-01-17T00:44:24.823577740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:24.897177 systemd[1]: Started cri-containerd-a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8.scope - libcontainer container a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8.
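In the IPAM trace above, Calico confirms this host's affinity for block 192.168.88.128/26 and claims 192.168.88.137 out of it for the new whisker pod; the endpoint is then recorded with a single-host /32 network. The /26 arithmetic behind those lines, for reference:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses)        # 64 addresses per /26 affinity block
print(block[0], block[-1])        # 192.168.88.128 192.168.88.191
print(ipaddress.ip_address("192.168.88.137") in block)  # True

# The WorkloadEndpoint stores IPNetworks:["192.168.88.137/32"]: the pod
# receives one address from the block, routed as a host route.
```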
Jan 17 00:44:25.000200 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:44:25.100634 containerd[1476]: time="2026-01-17T00:44:25.100381584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-688bc4c644-qrndd,Uid:ee6e145a-aa21-42ce-80af-75c3ba3e223d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6a97b9ee6bcc2f2f9f28d6dbab6f186437813e50760b9b18c0faf45d13580e8\""
Jan 17 00:44:25.109387 containerd[1476]: time="2026-01-17T00:44:25.109248177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:44:25.222900 containerd[1476]: time="2026-01-17T00:44:25.222840999Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:44:25.230640 containerd[1476]: time="2026-01-17T00:44:25.228928010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:44:25.230640 containerd[1476]: time="2026-01-17T00:44:25.229016868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:44:25.230826 kubelet[2596]: E0117 00:44:25.229438 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:44:25.230826 kubelet[2596]: E0117 00:44:25.229508 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:44:25.230826 kubelet[2596]: E0117 00:44:25.229603 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:44:25.233575 containerd[1476]: time="2026-01-17T00:44:25.233306863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:44:25.328400 containerd[1476]: time="2026-01-17T00:44:25.327669140Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:44:25.331402 containerd[1476]: time="2026-01-17T00:44:25.331226653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:44:25.331402 containerd[1476]: time="2026-01-17T00:44:25.331338694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:44:25.331869 kubelet[2596]: E0117 00:44:25.331669 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:44:25.331869 kubelet[2596]: E0117 00:44:25.331761 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:44:25.332065 kubelet[2596]: E0117 00:44:25.331911 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:44:25.332065 kubelet[2596]: E0117 00:44:25.332020 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d"
Jan 17 00:44:25.536927 kubelet[2596]: E0117 00:44:25.534860 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d"
Jan 17 00:44:26.115666 systemd-networkd[1373]: cali2743234b2b3: Gained IPv6LL
Jan 17 00:44:26.548497 kubelet[2596]: E0117 00:44:26.547775 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d"
Jan 17 00:44:28.524662 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:36520.service - OpenSSH per-connection server daemon (10.0.0.1:36520).
Jan 17 00:44:28.684741 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 36520 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:44:28.694066 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:44:28.753544 systemd-logind[1459]: New session 11 of user core.
Jan 17 00:44:28.783855 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 00:44:29.508566 sshd[5880]: pam_unix(sshd:session): session closed for user core
Jan 17 00:44:29.531211 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:36520.service: Deactivated successfully.
Jan 17 00:44:29.535460 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 00:44:29.566522 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit.
Jan 17 00:44:29.584424 systemd-logind[1459]: Removed session 11.
Jan 17 00:44:29.644425 containerd[1476]: time="2026-01-17T00:44:29.643291156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:44:29.702223 update_engine[1460]: I20260117 00:44:29.702049 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 17 00:44:29.702934 update_engine[1460]: I20260117 00:44:29.702899 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 17 00:44:29.712839 update_engine[1460]: I20260117 00:44:29.712802 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.715648 1460 omaha_request_params.cc:62] Current group set to lts
Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716116 1460 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716145 1460 update_attempter.cc:643] Scheduling an action processor start.
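The update_engine entries here and below show the Omaha poll going nowhere by design: the request is posted "to disabled", i.e. the update server URL on this image appears to be the literal string disabled, so libcurl fails with "Could not resolve host: disabled" and the fetcher simply schedules another attempt ("No HTTP response, retry 2" lands about ten seconds after retry 1, at 00:44:39). A toy fixed-interval retry loop loosely modeled on that logged behavior; the URL, retry count, and delay are illustrative assumptions, not Flatcar's actual configuration:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, retries: int = 3, delay_s: float = 10.0):
    """Retry a fetch at a fixed interval, printing log-style retry lines."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            print(f"No HTTP response, retry {attempt}")
            time.sleep(delay_s)
    return None

# fetch_with_retries("https://disabled/")  # hypothetical: 'disabled' never
# resolves, so every attempt logs a retry, much like update_engine above.
```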
Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716176 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716377 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716545 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716569 1460 omaha_request_action.cc:272] Request: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: Jan 17 00:44:29.719183 update_engine[1460]: I20260117 00:44:29.716645 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:44:29.750298 update_engine[1460]: I20260117 00:44:29.744844 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:44:29.750298 update_engine[1460]: I20260117 00:44:29.745434 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:44:29.764734 containerd[1476]: time="2026-01-17T00:44:29.764511517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:29.773165 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:44:29.790981 update_engine[1460]: E20260117 00:44:29.787664 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:44:29.790981 update_engine[1460]: I20260117 00:44:29.787818 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:44:29.810681 containerd[1476]: time="2026-01-17T00:44:29.810282697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:29.810681 containerd[1476]: time="2026-01-17T00:44:29.810407933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:29.812254 kubelet[2596]: E0117 00:44:29.811322 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:29.812254 kubelet[2596]: E0117 00:44:29.811385 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:29.812254 kubelet[2596]: E0117 00:44:29.811584 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:29.812254 kubelet[2596]: E0117 00:44:29.811645 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:44:29.816578 containerd[1476]: time="2026-01-17T00:44:29.816218966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:44:29.944994 containerd[1476]: time="2026-01-17T00:44:29.944720823Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:29.951755 containerd[1476]: time="2026-01-17T00:44:29.951567778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:44:29.951755 containerd[1476]: time="2026-01-17T00:44:29.951682103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:29.953307 kubelet[2596]: E0117 00:44:29.952251 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:29.953307 kubelet[2596]: E0117 00:44:29.952325 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:29.953307 kubelet[2596]: E0117 00:44:29.952435 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:29.953307 kubelet[2596]: E0117 00:44:29.952483 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:44:31.652741 containerd[1476]: time="2026-01-17T00:44:31.647642826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:44:31.778193 containerd[1476]: time="2026-01-17T00:44:31.777997112Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:31.785512 containerd[1476]: time="2026-01-17T00:44:31.785408677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:31.785681 containerd[1476]: time="2026-01-17T00:44:31.785561534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:31.787450 kubelet[2596]: E0117 00:44:31.787137 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:31.787450 kubelet[2596]: E0117 00:44:31.787208 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:31.787450 kubelet[2596]: E0117 00:44:31.787319 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:31.787450 kubelet[2596]: E0117 00:44:31.787366 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:44:32.649591 containerd[1476]: time="2026-01-17T00:44:32.649506004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:44:32.773828 containerd[1476]: time="2026-01-17T00:44:32.773503151Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:32.776503 containerd[1476]: time="2026-01-17T00:44:32.776301487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:44:32.777705 containerd[1476]: time="2026-01-17T00:44:32.776520639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:44:32.777705 containerd[1476]: time="2026-01-17T00:44:32.777439271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:44:32.777808 kubelet[2596]: E0117 00:44:32.776756 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:32.777808 kubelet[2596]: E0117 00:44:32.776823 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:32.780826 kubelet[2596]: E0117 00:44:32.780375 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:32.855340 containerd[1476]: time="2026-01-17T00:44:32.855273597Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:32.872872 containerd[1476]: time="2026-01-17T00:44:32.872619675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:44:32.872872 containerd[1476]: time="2026-01-17T00:44:32.872763054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:32.873217 kubelet[2596]: E0117 00:44:32.872950 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:32.873217 kubelet[2596]: E0117 00:44:32.873015 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:32.873717 kubelet[2596]: E0117 00:44:32.873328 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:32.873717 kubelet[2596]: E0117 00:44:32.873387 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:32.882029 containerd[1476]: time="2026-01-17T00:44:32.881489921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:44:32.986876 containerd[1476]: time="2026-01-17T00:44:32.986588626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:32.992934 containerd[1476]: time="2026-01-17T00:44:32.992709207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:44:32.993820 containerd[1476]: time="2026-01-17T00:44:32.993399453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:44:32.998199 kubelet[2596]: E0117 00:44:32.995371 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:32.998199 kubelet[2596]: E0117 00:44:32.995480 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:32.998199 kubelet[2596]: E0117 00:44:32.995593 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:32.998941 kubelet[2596]: E0117 00:44:32.995662 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:34.595562 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Jan 17 00:44:34.736606 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:34.739704 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:34.753721 systemd-logind[1459]: New session 12 of user core. Jan 17 00:44:34.765402 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:44:35.308590 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:35.337738 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:59810.service: Deactivated successfully. Jan 17 00:44:35.344502 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:44:35.347052 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:44:35.373481 systemd-logind[1459]: Removed session 12. Jan 17 00:44:35.783942 kubelet[2596]: E0117 00:44:35.782311 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:37.654836 containerd[1476]: time="2026-01-17T00:44:37.652647379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:44:37.769887 containerd[1476]: time="2026-01-17T00:44:37.766911518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:37.775187 containerd[1476]: time="2026-01-17T00:44:37.774652721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:44:37.775187 containerd[1476]: time="2026-01-17T00:44:37.774803323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:44:37.775887 kubelet[2596]: E0117 00:44:37.775736 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:37.775887 kubelet[2596]: E0117 00:44:37.775803 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:37.776931 kubelet[2596]: E0117 00:44:37.775925 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:37.787314 containerd[1476]: time="2026-01-17T00:44:37.786595025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:44:37.892206 containerd[1476]: time="2026-01-17T00:44:37.890189553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:37.898220 containerd[1476]: time="2026-01-17T00:44:37.897587170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:44:37.898220 containerd[1476]: time="2026-01-17T00:44:37.897734747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:37.898434 kubelet[2596]: E0117 00:44:37.897949 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:37.898434 kubelet[2596]: E0117 00:44:37.898051 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:37.900214 kubelet[2596]: E0117 00:44:37.899434 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:37.900214 kubelet[2596]: E0117 00:44:37.899520 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:44:39.584759 update_engine[1460]: I20260117 00:44:39.581739 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:44:39.588460 update_engine[1460]: I20260117 00:44:39.586233 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:44:39.588460 update_engine[1460]: I20260117 00:44:39.586561 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:44:39.609953 update_engine[1460]: E20260117 00:44:39.609615 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:44:39.609953 update_engine[1460]: I20260117 00:44:39.609760 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:44:40.354732 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:59824.service - OpenSSH per-connection server daemon (10.0.0.1:59824). Jan 17 00:44:40.531722 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 59824 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:40.538501 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:40.583491 systemd-logind[1459]: New session 13 of user core. Jan 17 00:44:40.601745 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:44:41.089984 sshd[5939]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:41.098853 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:59824.service: Deactivated successfully. Jan 17 00:44:41.108785 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:44:41.113552 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:44:41.118012 systemd-logind[1459]: Removed session 13. 
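The PullImage failures recorded above all share one shape: containerd's resolver asks ghcr.io for the v3.30.4 tag, gets http.StatusNotFound ("trying next host - response was http.StatusNotFound"), gives up with "failed to resolve reference ...: not found", and the kubelet surfaces that as ErrImagePull on the affected pod. A minimal Go sketch of the same resolve-and-pull step through the containerd 1.x Go client follows; the socket path, the k8s.io namespace, and the image reference are taken from the log above, and the program is an illustrative reproduction, not the kubelet's own pull path.

// pullcheck.go - sketch: attempt the same pull the kubelet is failing on,
// via the containerd 1.x Go client, to observe the NotFound resolution error.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses (default socket).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images on a Kubernetes node live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references failing in the log above.
	ref := "ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"

	// Pull resolves the reference first; a missing tag surfaces as the same
	// "failed to resolve reference ...: not found" error recorded above.
	if img, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		fmt.Println("pull failed:", err)
	} else {
		fmt.Println("pulled", img.Name())
	}
}
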
Jan 17 00:44:41.643660 kubelet[2596]: E0117 00:44:41.640780 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:41.645046 kubelet[2596]: E0117 00:44:41.644915 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:44:43.646169 kubelet[2596]: E0117 00:44:43.644485 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:43.652468 kubelet[2596]: E0117 00:44:43.652339 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:44:44.653268 kubelet[2596]: E0117 00:44:44.651823 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:45.649683 kubelet[2596]: E0117 00:44:45.647512 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:44:46.142918 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:58046.service - OpenSSH per-connection server daemon (10.0.0.1:58046). Jan 17 00:44:46.344188 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 58046 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:46.352395 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:46.385364 systemd-logind[1459]: New session 14 of user core. Jan 17 00:44:46.398074 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:44:46.908723 sshd[5958]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:46.919788 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:58046.service: Deactivated successfully. Jan 17 00:44:46.922580 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:44:46.928898 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:44:46.933951 systemd-logind[1459]: Removed session 14. Jan 17 00:44:48.034958 containerd[1476]: time="2026-01-17T00:44:48.032739687Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.296 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk5gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4177c8b-a26b-419d-9b18-e9e581c975bb", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e", Pod:"coredns-66bc5c9577-fk5gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcb3e4e4de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.299 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.304 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" iface="eth0" netns="" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.307 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.308 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.408 [INFO][5998] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.411 [INFO][5998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.411 [INFO][5998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.433 [WARNING][5998] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.434 [INFO][5998] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.473 [INFO][5998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:48.492785 containerd[1476]: 2026-01-17 00:44:48.486 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.493836 containerd[1476]: time="2026-01-17T00:44:48.493645860Z" level=info msg="TearDown network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" successfully" Jan 17 00:44:48.493836 containerd[1476]: time="2026-01-17T00:44:48.493687196Z" level=info msg="StopPodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" returns successfully" Jan 17 00:44:48.519991 containerd[1476]: time="2026-01-17T00:44:48.519824818Z" level=info msg="RemovePodSandbox for \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:44:48.524529 containerd[1476]: time="2026-01-17T00:44:48.523944879Z" level=info msg="Forcibly stopping sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\"" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.720 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk5gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4177c8b-a26b-419d-9b18-e9e581c975bb", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"676c9ab955ef1889a2410fd901c3eb58438e29281bd5aa481be5f379cf4a690e", Pod:"coredns-66bc5c9577-fk5gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcb3e4e4de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.722 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.722 [INFO][6014] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" iface="eth0" netns="" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.722 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.722 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.880 [INFO][6023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.880 [INFO][6023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.880 [INFO][6023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.916 [WARNING][6023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.916 [INFO][6023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" HandleID="k8s-pod-network.bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Workload="localhost-k8s-coredns--66bc5c9577--fk5gs-eth0" Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.934 [INFO][6023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:48.966404 containerd[1476]: 2026-01-17 00:44:48.942 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97" Jan 17 00:44:48.966404 containerd[1476]: time="2026-01-17T00:44:48.964960003Z" level=info msg="TearDown network for sandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" successfully" Jan 17 00:44:48.994812 containerd[1476]: time="2026-01-17T00:44:48.994751042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:48.995077 containerd[1476]: time="2026-01-17T00:44:48.994854186Z" level=info msg="RemovePodSandbox \"bb5ea65586c906310331ad089a9acc29694869ca13fbc801080fad08a13e3f97\" returns successfully" Jan 17 00:44:48.999425 containerd[1476]: time="2026-01-17T00:44:48.996525513Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.114 [WARNING][6040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a01101-a646-4d50-93a3-7a41aecfea23", ResourceVersion:"1456", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b", Pod:"calico-apiserver-76d788f98c-msd48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5a7c86d121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.115 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.115 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" iface="eth0" netns="" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.115 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.115 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.287 [INFO][6048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.287 [INFO][6048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.287 [INFO][6048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.321 [WARNING][6048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.321 [INFO][6048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.331 [INFO][6048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:49.348637 containerd[1476]: 2026-01-17 00:44:49.338 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.348637 containerd[1476]: time="2026-01-17T00:44:49.347830460Z" level=info msg="TearDown network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" successfully" Jan 17 00:44:49.348637 containerd[1476]: time="2026-01-17T00:44:49.347873781Z" level=info msg="StopPodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" returns successfully" Jan 17 00:44:49.352713 containerd[1476]: time="2026-01-17T00:44:49.352411016Z" level=info msg="RemovePodSandbox for \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:44:49.352713 containerd[1476]: time="2026-01-17T00:44:49.352464557Z" level=info msg="Forcibly stopping sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\"" Jan 17 00:44:49.578866 update_engine[1460]: I20260117 00:44:49.576387 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:44:49.578866 update_engine[1460]: I20260117 00:44:49.576815 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:44:49.584537 update_engine[1460]: I20260117 00:44:49.584458 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
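Once a pull fails with ErrImagePull, the kubelet does not retry immediately: the container moves into ImagePullBackOff, which is what the "Back-off pulling image" records before and after this point show, with each pod sync skipped until a growing back-off window expires. A small Go sketch of that doubling-with-cap schedule follows; the 10-second initial delay and 300-second cap are the commonly cited kubelet defaults and should be read as assumptions here, since the log itself only shows the resulting skipped syncs.

// backoff.go - sketch of a kubelet-style image-pull back-off: double the
// wait after each failed pull, clamped at a maximum. Values are illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second  // assumed kubelet default
		maxDelay = 300 * time.Second // assumed kubelet cap
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; back off %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
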
Jan 17 00:44:49.604208 update_engine[1460]: E20260117 00:44:49.603888 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:44:49.604208 update_engine[1460]: I20260117 00:44:49.603991 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:44:49.650575 kubelet[2596]: E0117 00:44:49.648843 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.499 [WARNING][6066] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a01101-a646-4d50-93a3-7a41aecfea23", ResourceVersion:"1456", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0de05577a2a9adb53ae0737beeb1459431deafd908d4367d51448cc6d1a200b", Pod:"calico-apiserver-76d788f98c-msd48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5a7c86d121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.500 [INFO][6066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.500 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" iface="eth0" netns="" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.500 [INFO][6066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.500 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.612 [INFO][6074] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.613 [INFO][6074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.613 [INFO][6074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.634 [WARNING][6074] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.634 [INFO][6074] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" HandleID="k8s-pod-network.4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Workload="localhost-k8s-calico--apiserver--76d788f98c--msd48-eth0" Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.638 [INFO][6074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:49.678859 containerd[1476]: 2026-01-17 00:44:49.652 [INFO][6066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a" Jan 17 00:44:49.678859 containerd[1476]: time="2026-01-17T00:44:49.678366621Z" level=info msg="TearDown network for sandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" successfully" Jan 17 00:44:49.700803 containerd[1476]: time="2026-01-17T00:44:49.700564373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:49.700803 containerd[1476]: time="2026-01-17T00:44:49.700662216Z" level=info msg="RemovePodSandbox \"4f4ebffe87c2ba4ae23da3d315fbe977e9610af8c9bf0f9b93b66e506782b70a\" returns successfully" Jan 17 00:44:49.701878 containerd[1476]: time="2026-01-17T00:44:49.701510248Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:49.876 [WARNING][6092] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0", GenerateName:"calico-kube-controllers-674c9b8465-", Namespace:"calico-system", SelfLink:"", UID:"084547cb-aa8f-42ba-b949-f26ba954f5f8", ResourceVersion:"1451", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674c9b8465", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031", Pod:"calico-kube-controllers-674c9b8465-rpks6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ccbe8b2286", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:49.877 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:49.877 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" iface="eth0" netns="" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:49.877 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:49.877 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.014 [INFO][6100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.014 [INFO][6100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.014 [INFO][6100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.037 [WARNING][6100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.037 [INFO][6100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.044 [INFO][6100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:50.058989 containerd[1476]: 2026-01-17 00:44:50.053 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.058989 containerd[1476]: time="2026-01-17T00:44:50.058808813Z" level=info msg="TearDown network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" successfully" Jan 17 00:44:50.058989 containerd[1476]: time="2026-01-17T00:44:50.058848107Z" level=info msg="StopPodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" returns successfully" Jan 17 00:44:50.061175 containerd[1476]: time="2026-01-17T00:44:50.060698360Z" level=info msg="RemovePodSandbox for \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" Jan 17 00:44:50.061175 containerd[1476]: time="2026-01-17T00:44:50.060741610Z" level=info msg="Forcibly stopping sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\"" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.216 [WARNING][6117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0", GenerateName:"calico-kube-controllers-674c9b8465-", Namespace:"calico-system", SelfLink:"", UID:"084547cb-aa8f-42ba-b949-f26ba954f5f8", ResourceVersion:"1451", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674c9b8465", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a5b0dad5da465ec4de677f9fd1bdee6d7c8c1d70f6b01fd8161dbbf86003031", Pod:"calico-kube-controllers-674c9b8465-rpks6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ccbe8b2286", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.217 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.217 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" iface="eth0" netns="" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.217 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.217 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.335 [INFO][6126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.339 [INFO][6126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.339 [INFO][6126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.368 [WARNING][6126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.368 [INFO][6126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" HandleID="k8s-pod-network.6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Workload="localhost-k8s-calico--kube--controllers--674c9b8465--rpks6-eth0" Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.394 [INFO][6126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:50.442926 containerd[1476]: 2026-01-17 00:44:50.413 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8" Jan 17 00:44:50.442926 containerd[1476]: time="2026-01-17T00:44:50.430643693Z" level=info msg="TearDown network for sandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" successfully" Jan 17 00:44:50.445384 containerd[1476]: time="2026-01-17T00:44:50.445036276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:50.445384 containerd[1476]: time="2026-01-17T00:44:50.445184705Z" level=info msg="RemovePodSandbox \"6be080916150076f9ca0528aff5c760d5696351db93e6266db2cefc5e47afba8\" returns successfully" Jan 17 00:44:50.445989 containerd[1476]: time="2026-01-17T00:44:50.445941801Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.541 [WARNING][6143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ae5c95-165c-41b7-b9c1-05cec94160e8", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5", Pod:"calico-apiserver-76d788f98c-gwfnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fb6b0067fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.542 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.542 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" iface="eth0" netns="" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.542 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.542 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.689 [INFO][6152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.690 [INFO][6152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.690 [INFO][6152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.706 [WARNING][6152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.706 [INFO][6152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.710 [INFO][6152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:50.744893 containerd[1476]: 2026-01-17 00:44:50.730 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:50.744893 containerd[1476]: time="2026-01-17T00:44:50.742755292Z" level=info msg="TearDown network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" successfully" Jan 17 00:44:50.744893 containerd[1476]: time="2026-01-17T00:44:50.742793774Z" level=info msg="StopPodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" returns successfully" Jan 17 00:44:50.744893 containerd[1476]: time="2026-01-17T00:44:50.743851049Z" level=info msg="RemovePodSandbox for \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" Jan 17 00:44:50.744893 containerd[1476]: time="2026-01-17T00:44:50.743890734Z" level=info msg="Forcibly stopping sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\"" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:50.985 [WARNING][6169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0", GenerateName:"calico-apiserver-76d788f98c-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ae5c95-165c-41b7-b9c1-05cec94160e8", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d788f98c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dc6e6dbfa33df7a97f9ffe32e9cc7492742ceec5da09f4750990b08dbdbf2d5", Pod:"calico-apiserver-76d788f98c-gwfnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fb6b0067fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:50.986 [INFO][6169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:50.986 [INFO][6169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" iface="eth0" netns="" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:50.986 [INFO][6169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:50.986 [INFO][6169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.162 [INFO][6177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.164 [INFO][6177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.170 [INFO][6177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.195 [WARNING][6177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.195 [INFO][6177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" HandleID="k8s-pod-network.6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Workload="localhost-k8s-calico--apiserver--76d788f98c--gwfnp-eth0" Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.203 [INFO][6177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:51.226493 containerd[1476]: 2026-01-17 00:44:51.211 [INFO][6169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed" Jan 17 00:44:51.226493 containerd[1476]: time="2026-01-17T00:44:51.226021233Z" level=info msg="TearDown network for sandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" successfully" Jan 17 00:44:51.266428 containerd[1476]: time="2026-01-17T00:44:51.264785889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:51.266428 containerd[1476]: time="2026-01-17T00:44:51.264888773Z" level=info msg="RemovePodSandbox \"6e99656e24581ae429ab187a3b4ec70159f7e0e3432782654912334d12a348ed\" returns successfully" Jan 17 00:44:51.266428 containerd[1476]: time="2026-01-17T00:44:51.265677704Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.434 [WARNING][6193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--n22c9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91", ResourceVersion:"1443", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc", Pod:"goldmane-7c778bb748-n22c9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ae8041c022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.439 [INFO][6193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.439 [INFO][6193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" iface="eth0" netns="" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.439 [INFO][6193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.439 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.579 [INFO][6201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.580 [INFO][6201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.585 [INFO][6201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.604 [WARNING][6201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.604 [INFO][6201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.608 [INFO][6201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:51.635738 containerd[1476]: 2026-01-17 00:44:51.622 [INFO][6193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.635738 containerd[1476]: time="2026-01-17T00:44:51.635423443Z" level=info msg="TearDown network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" successfully" Jan 17 00:44:51.635738 containerd[1476]: time="2026-01-17T00:44:51.635500307Z" level=info msg="StopPodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" returns successfully" Jan 17 00:44:51.641558 containerd[1476]: time="2026-01-17T00:44:51.641361397Z" level=info msg="RemovePodSandbox for \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" Jan 17 00:44:51.641558 containerd[1476]: time="2026-01-17T00:44:51.641401422Z" level=info msg="Forcibly stopping sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\"" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.828 [WARNING][6219] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--n22c9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ec63a8db-6e49-4fec-8b7a-9f9042c1bf91", ResourceVersion:"1443", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdd253b64b2fddc10848669327ba52b0b240996bfb9b1749384418443f5aafc", Pod:"goldmane-7c778bb748-n22c9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ae8041c022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.829 [INFO][6219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.829 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" iface="eth0" netns="" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.829 [INFO][6219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.829 [INFO][6219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.903 [INFO][6227] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.905 [INFO][6227] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.908 [INFO][6227] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.923 [WARNING][6227] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.924 [INFO][6227] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" HandleID="k8s-pod-network.c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Workload="localhost-k8s-goldmane--7c778bb748--n22c9-eth0" Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.928 [INFO][6227] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:51.952415 containerd[1476]: 2026-01-17 00:44:51.938 [INFO][6219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f" Jan 17 00:44:51.952415 containerd[1476]: time="2026-01-17T00:44:51.948862031Z" level=info msg="TearDown network for sandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" successfully" Jan 17 00:44:51.962762 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:58060.service - OpenSSH per-connection server daemon (10.0.0.1:58060). Jan 17 00:44:51.986954 containerd[1476]: time="2026-01-17T00:44:51.986602755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:51.986954 containerd[1476]: time="2026-01-17T00:44:51.986720796Z" level=info msg="RemovePodSandbox \"c60e2f5feeb3eae5968f218fb98a1d4c77d809076258d649d8582b2e9e142f5f\" returns successfully" Jan 17 00:44:51.987724 containerd[1476]: time="2026-01-17T00:44:51.987652866Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" Jan 17 00:44:52.171940 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 58060 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:52.179924 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:52.213608 systemd-logind[1459]: New session 15 of user core. Jan 17 00:44:52.229011 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.142 [WARNING][6246] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.143 [INFO][6246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.143 [INFO][6246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns="" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.143 [INFO][6246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.143 [INFO][6246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.246 [INFO][6255] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.248 [INFO][6255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.248 [INFO][6255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.285 [WARNING][6255] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.285 [INFO][6255] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.304 [INFO][6255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:52.329056 containerd[1476]: 2026-01-17 00:44:52.317 [INFO][6246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.329056 containerd[1476]: time="2026-01-17T00:44:52.328157374Z" level=info msg="TearDown network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" successfully" Jan 17 00:44:52.329056 containerd[1476]: time="2026-01-17T00:44:52.328193452Z" level=info msg="StopPodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" returns successfully" Jan 17 00:44:52.333596 containerd[1476]: time="2026-01-17T00:44:52.333489834Z" level=info msg="RemovePodSandbox for \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" Jan 17 00:44:52.333596 containerd[1476]: time="2026-01-17T00:44:52.333557551Z" level=info msg="Forcibly stopping sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\"" Jan 17 00:44:52.781673 sshd[6235]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:52.803476 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:58060.service: Deactivated successfully. Jan 17 00:44:52.825433 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:44:52.831212 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:44:52.835972 systemd-logind[1459]: Removed session 15. 
Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.566 [WARNING][6281] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.566 [INFO][6281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.566 [INFO][6281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" iface="eth0" netns="" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.566 [INFO][6281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.566 [INFO][6281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.771 [INFO][6290] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.771 [INFO][6290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.771 [INFO][6290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.817 [WARNING][6290] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.818 [INFO][6290] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" HandleID="k8s-pod-network.abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.823 [INFO][6290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:52.847044 containerd[1476]: 2026-01-17 00:44:52.834 [INFO][6281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a" Jan 17 00:44:52.848159 containerd[1476]: time="2026-01-17T00:44:52.848078110Z" level=info msg="TearDown network for sandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" successfully" Jan 17 00:44:52.879865 containerd[1476]: time="2026-01-17T00:44:52.879735245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:44:52.879865 containerd[1476]: time="2026-01-17T00:44:52.879837467Z" level=info msg="RemovePodSandbox \"abf908e2cc6c6c8dbcacd693bc61286868209c87b675a20db125214a373de95a\" returns successfully" Jan 17 00:44:52.880560 containerd[1476]: time="2026-01-17T00:44:52.880488626Z" level=info msg="StopPodSandbox for \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\"" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.086 [WARNING][6309] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.086 [INFO][6309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.086 [INFO][6309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" iface="eth0" netns="" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.086 [INFO][6309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.086 [INFO][6309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.176 [INFO][6317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.177 [INFO][6317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.177 [INFO][6317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.201 [WARNING][6317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.202 [INFO][6317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.219 [INFO][6317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:53.246861 containerd[1476]: 2026-01-17 00:44:53.233 [INFO][6309] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.247559 containerd[1476]: time="2026-01-17T00:44:53.246918102Z" level=info msg="TearDown network for sandbox \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" successfully" Jan 17 00:44:53.247559 containerd[1476]: time="2026-01-17T00:44:53.246953999Z" level=info msg="StopPodSandbox for \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" returns successfully" Jan 17 00:44:53.247623 containerd[1476]: time="2026-01-17T00:44:53.247558424Z" level=info msg="RemovePodSandbox for \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\"" Jan 17 00:44:53.247623 containerd[1476]: time="2026-01-17T00:44:53.247591876Z" level=info msg="Forcibly stopping sandbox \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\"" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.388 [WARNING][6332] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" WorkloadEndpoint="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.389 [INFO][6332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.389 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" iface="eth0" netns="" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.389 [INFO][6332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.389 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.537 [INFO][6340] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.542 [INFO][6340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.542 [INFO][6340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.585 [WARNING][6340] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.589 [INFO][6340] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" HandleID="k8s-pod-network.c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Workload="localhost-k8s-whisker--6588bf47fd--nsnxs-eth0" Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.605 [INFO][6340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:53.637805 containerd[1476]: 2026-01-17 00:44:53.624 [INFO][6332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5" Jan 17 00:44:53.641286 containerd[1476]: time="2026-01-17T00:44:53.638725912Z" level=info msg="TearDown network for sandbox \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" successfully" Jan 17 00:44:53.694891 containerd[1476]: time="2026-01-17T00:44:53.694042483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:53.694891 containerd[1476]: time="2026-01-17T00:44:53.694220578Z" level=info msg="RemovePodSandbox \"c96d15088ce6459429de876090d0945fe94925c33fc419edebfba259a33108a5\" returns successfully" Jan 17 00:44:53.696229 containerd[1476]: time="2026-01-17T00:44:53.695792179Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:53.967 [WARNING][6358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jdngt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa61c0c6-a39e-4c93-94a9-44f82847e39a", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58", Pod:"csi-node-driver-jdngt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9070d2051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:53.968 [INFO][6358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:53.968 [INFO][6358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" iface="eth0" netns="" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:53.968 [INFO][6358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:53.968 [INFO][6358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.199 [INFO][6366] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.199 [INFO][6366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.200 [INFO][6366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.241 [WARNING][6366] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.241 [INFO][6366] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.264 [INFO][6366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:54.293429 containerd[1476]: 2026-01-17 00:44:54.280 [INFO][6358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.293429 containerd[1476]: time="2026-01-17T00:44:54.293235956Z" level=info msg="TearDown network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" successfully" Jan 17 00:44:54.293429 containerd[1476]: time="2026-01-17T00:44:54.293268397Z" level=info msg="StopPodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" returns successfully" Jan 17 00:44:54.296184 containerd[1476]: time="2026-01-17T00:44:54.295233976Z" level=info msg="RemovePodSandbox for \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:44:54.296184 containerd[1476]: time="2026-01-17T00:44:54.295269603Z" level=info msg="Forcibly stopping sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\"" Jan 17 00:44:54.658577 containerd[1476]: time="2026-01-17T00:44:54.656810123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.477 [WARNING][6384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jdngt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa61c0c6-a39e-4c93-94a9-44f82847e39a", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb286b561aafab7b5350aa7cda44c9d33179267c94b13dc75a4982fb0a5a5b58", Pod:"csi-node-driver-jdngt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9070d2051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.482 [INFO][6384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.482 [INFO][6384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" iface="eth0" netns="" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.482 [INFO][6384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.482 [INFO][6384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.639 [INFO][6393] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.639 [INFO][6393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.639 [INFO][6393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.676 [WARNING][6393] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.676 [INFO][6393] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" HandleID="k8s-pod-network.ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Workload="localhost-k8s-csi--node--driver--jdngt-eth0" Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.691 [INFO][6393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:54.706205 containerd[1476]: 2026-01-17 00:44:54.699 [INFO][6384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac" Jan 17 00:44:54.706835 containerd[1476]: time="2026-01-17T00:44:54.706262604Z" level=info msg="TearDown network for sandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" successfully" Jan 17 00:44:54.742567 containerd[1476]: time="2026-01-17T00:44:54.742222167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:44:54.742567 containerd[1476]: time="2026-01-17T00:44:54.742367290Z" level=info msg="RemovePodSandbox \"ff25e9ce28b73a3416b3ab7f8ee8f4fa8b63f5ab4ac67c37020a31144dfa20ac\" returns successfully" Jan 17 00:44:54.744787 containerd[1476]: time="2026-01-17T00:44:54.744568883Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:44:54.763905 containerd[1476]: time="2026-01-17T00:44:54.762612003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:54.768524 containerd[1476]: time="2026-01-17T00:44:54.766682692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:54.768524 containerd[1476]: time="2026-01-17T00:44:54.766778352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:54.768707 kubelet[2596]: E0117 00:44:54.767019 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:54.768707 kubelet[2596]: E0117 00:44:54.767081 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:54.768707 kubelet[2596]: E0117 00:44:54.767254 2596 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:54.768707 kubelet[2596]: E0117 00:44:54.767302 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.036 [WARNING][6409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j7s62-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1abfdd34-176a-4bd5-8495-196edf2ca012", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37", Pod:"coredns-66bc5c9577-j7s62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie54942141b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.037 [INFO][6409] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.037 [INFO][6409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" iface="eth0" netns="" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.037 [INFO][6409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.038 [INFO][6409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.146 [INFO][6418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.146 [INFO][6418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.147 [INFO][6418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.165 [WARNING][6418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.168 [INFO][6418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.177 [INFO][6418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:55.218082 containerd[1476]: 2026-01-17 00:44:55.202 [INFO][6409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.218082 containerd[1476]: time="2026-01-17T00:44:55.217927936Z" level=info msg="TearDown network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" successfully" Jan 17 00:44:55.218082 containerd[1476]: time="2026-01-17T00:44:55.217960717Z" level=info msg="StopPodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" returns successfully" Jan 17 00:44:55.219080 containerd[1476]: time="2026-01-17T00:44:55.218793514Z" level=info msg="RemovePodSandbox for \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:44:55.219080 containerd[1476]: time="2026-01-17T00:44:55.218829892Z" level=info msg="Forcibly stopping sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\"" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.350 [WARNING][6437] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j7s62-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1abfdd34-176a-4bd5-8495-196edf2ca012", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e337b008e51826dba8907fa87a149e876ee36d23853a8978dd5b7eecd6f4f37", Pod:"coredns-66bc5c9577-j7s62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie54942141b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.351 [INFO][6437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.351 [INFO][6437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" iface="eth0" netns="" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.351 [INFO][6437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.351 [INFO][6437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.439 [INFO][6445] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.440 [INFO][6445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.441 [INFO][6445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.464 [WARNING][6445] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.464 [INFO][6445] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" HandleID="k8s-pod-network.f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Workload="localhost-k8s-coredns--66bc5c9577--j7s62-eth0" Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.469 [INFO][6445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:44:55.486724 containerd[1476]: 2026-01-17 00:44:55.473 [INFO][6437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854" Jan 17 00:44:55.486724 containerd[1476]: time="2026-01-17T00:44:55.477873464Z" level=info msg="TearDown network for sandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" successfully" Jan 17 00:44:55.500830 containerd[1476]: time="2026-01-17T00:44:55.498948842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:44:55.500830 containerd[1476]: time="2026-01-17T00:44:55.499354283Z" level=info msg="RemovePodSandbox \"f8ad8f5477e534da23948223923a1e7c04aeb5c069a78123d8166e6b87a20854\" returns successfully" Jan 17 00:44:55.651781 containerd[1476]: time="2026-01-17T00:44:55.651177771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:44:55.765780 containerd[1476]: time="2026-01-17T00:44:55.765204973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:55.777601 containerd[1476]: time="2026-01-17T00:44:55.777280392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:44:55.777601 containerd[1476]: time="2026-01-17T00:44:55.777441746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:44:55.777993 kubelet[2596]: E0117 00:44:55.777830 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:55.777993 kubelet[2596]: E0117 00:44:55.777925 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:55.780540 kubelet[2596]: E0117 00:44:55.778032 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:55.783176 containerd[1476]: time="2026-01-17T00:44:55.781076030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:44:55.878832 containerd[1476]: time="2026-01-17T00:44:55.878740825Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:55.886997 containerd[1476]: time="2026-01-17T00:44:55.884545188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:44:55.886997 containerd[1476]: time="2026-01-17T00:44:55.884806408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:44:55.887258 kubelet[2596]: E0117 00:44:55.885138 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:55.887258 kubelet[2596]: E0117 00:44:55.885213 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:55.887258 kubelet[2596]: E0117 00:44:55.885361 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:55.887552 kubelet[2596]: E0117 00:44:55.885428 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:44:56.651154 containerd[1476]: time="2026-01-17T00:44:56.650860059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:44:56.827587 containerd[1476]: time="2026-01-17T00:44:56.827271619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:56.836220 containerd[1476]: time="2026-01-17T00:44:56.835868716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:44:56.836220 containerd[1476]: time="2026-01-17T00:44:56.836007878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:56.843672 kubelet[2596]: E0117 00:44:56.840332 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:56.843672 kubelet[2596]: E0117 00:44:56.840459 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:56.843672 kubelet[2596]: E0117 00:44:56.840730 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:56.843672 kubelet[2596]: E0117 00:44:56.840837 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:44:57.651936 containerd[1476]: time="2026-01-17T00:44:57.651857711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:44:57.751276 containerd[1476]: time="2026-01-17T00:44:57.751079385Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:57.774466 containerd[1476]: time="2026-01-17T00:44:57.771595869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:44:57.774466 containerd[1476]: time="2026-01-17T00:44:57.771623911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:57.776534 kubelet[2596]: E0117 00:44:57.771974 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:57.776534 kubelet[2596]: E0117 00:44:57.772068 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:57.776534 kubelet[2596]: E0117 00:44:57.772215 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:57.776534 kubelet[2596]: E0117 00:44:57.772259 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:44:57.846249 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:44174.service - OpenSSH per-connection server daemon (10.0.0.1:44174). Jan 17 00:44:57.980403 sshd[6454]: Accepted publickey for core from 10.0.0.1 port 44174 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:57.986851 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:58.016000 systemd-logind[1459]: New session 16 of user core. Jan 17 00:44:58.034917 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:44:58.558731 sshd[6454]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:58.573790 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:44174.service: Deactivated successfully. Jan 17 00:44:58.585270 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:44:58.592255 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:44:58.604319 systemd-logind[1459]: Removed session 16. Jan 17 00:44:59.576274 update_engine[1460]: I20260117 00:44:59.575790 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:44:59.586798 update_engine[1460]: I20260117 00:44:59.576540 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:44:59.586798 update_engine[1460]: I20260117 00:44:59.579361 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:44:59.608310 update_engine[1460]: E20260117 00:44:59.606822 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.606947 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607012 1460 omaha_request_action.cc:617] Omaha request response: Jan 17 00:44:59.608310 update_engine[1460]: E20260117 00:44:59.607239 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607295 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607307 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607317 1460 update_attempter.cc:306] Processing Done. Jan 17 00:44:59.608310 update_engine[1460]: E20260117 00:44:59.607424 1460 update_attempter.cc:619] Update failed. 
Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607443 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607455 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607465 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607570 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607609 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:44:59.608310 update_engine[1460]: I20260117 00:44:59.607625 1460 omaha_request_action.cc:272] Request: Jan 17 00:44:59.613155 update_engine[1460]: I20260117 00:44:59.607637 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:44:59.613155 update_engine[1460]: I20260117 00:44:59.607975 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:44:59.613155 update_engine[1460]: I20260117 00:44:59.608258 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:44:59.617362 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:44:59.632900 update_engine[1460]: E20260117 00:44:59.629527 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629657 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629673 1460 omaha_request_action.cc:617] Omaha request response: Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629686 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629696 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629705 1460 update_attempter.cc:306] Processing Done. Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629717 1460 update_attempter.cc:310] Error event sent. 
Jan 17 00:44:59.632900 update_engine[1460]: I20260117 00:44:59.629734 1460 update_check_scheduler.cc:74] Next update check in 48m22s Jan 17 00:44:59.634438 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:44:59.651864 containerd[1476]: time="2026-01-17T00:44:59.649312069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:44:59.761363 containerd[1476]: time="2026-01-17T00:44:59.760882570Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:59.770582 containerd[1476]: time="2026-01-17T00:44:59.770515497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:59.771444 containerd[1476]: time="2026-01-17T00:44:59.770798298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:59.771610 kubelet[2596]: E0117 00:44:59.771009 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:59.771610 kubelet[2596]: E0117 00:44:59.771070 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:59.776193 kubelet[2596]: E0117 00:44:59.773509 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:59.776193 kubelet[2596]: E0117 00:44:59.773570 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:45:00.656328 containerd[1476]: time="2026-01-17T00:45:00.655974762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:45:00.820854 containerd[1476]: time="2026-01-17T00:45:00.817785190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:00.826652 containerd[1476]: time="2026-01-17T00:45:00.825597381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:45:00.826652 containerd[1476]: time="2026-01-17T00:45:00.825728377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:45:00.827841 kubelet[2596]: E0117 00:45:00.827567 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:45:00.833278 kubelet[2596]: E0117 00:45:00.828748 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:45:00.833278 kubelet[2596]: E0117 00:45:00.828882 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:00.841671 containerd[1476]: time="2026-01-17T00:45:00.840318170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:45:00.925464 containerd[1476]: time="2026-01-17T00:45:00.925257166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:00.936823 containerd[1476]: time="2026-01-17T00:45:00.936640281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:45:00.936823 containerd[1476]: time="2026-01-17T00:45:00.936763202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:45:00.939767 kubelet[2596]: E0117 00:45:00.937292 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:45:00.939767 kubelet[2596]: E0117 00:45:00.937351 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:45:00.939767 kubelet[2596]: 
E0117 00:45:00.937490 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:00.940043 kubelet[2596]: E0117 00:45:00.937558 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:45:03.629197 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Jan 17 00:45:03.645003 kubelet[2596]: E0117 00:45:03.640067 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:03.709578 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:03.710575 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:03.735812 systemd-logind[1459]: New session 17 of user core. Jan 17 00:45:03.748059 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:45:04.151717 sshd[6475]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:04.197878 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:46070.service: Deactivated successfully. Jan 17 00:45:04.208412 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:45:04.213342 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:45:04.266913 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:46076.service - OpenSSH per-connection server daemon (10.0.0.1:46076). Jan 17 00:45:04.269941 systemd-logind[1459]: Removed session 17. Jan 17 00:45:04.351280 sshd[6490]: Accepted publickey for core from 10.0.0.1 port 46076 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:04.355900 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:04.384284 systemd-logind[1459]: New session 18 of user core. Jan 17 00:45:04.392964 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:45:04.943545 sshd[6490]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:04.970392 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:46076.service: Deactivated successfully. Jan 17 00:45:04.976857 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:45:04.993648 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. 
Jan 17 00:45:05.025250 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). Jan 17 00:45:05.036850 systemd-logind[1459]: Removed session 18. Jan 17 00:45:05.125910 sshd[6505]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:05.132040 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:05.154763 systemd-logind[1459]: New session 19 of user core. Jan 17 00:45:05.181132 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:45:05.670493 sshd[6505]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:05.731196 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:46086.service: Deactivated successfully. Jan 17 00:45:05.743405 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:45:05.764305 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:45:05.775343 systemd-logind[1459]: Removed session 19. Jan 17 00:45:06.657331 kubelet[2596]: E0117 00:45:06.656300 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:45:07.692323 kubelet[2596]: E0117 00:45:07.691552 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:45:11.462017 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:46100.service - OpenSSH per-connection server daemon (10.0.0.1:46100). 
Jan 17 00:45:11.529144 kubelet[2596]: E0117 00:45:11.528243 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:11.537207 kubelet[2596]: E0117 00:45:11.535311 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:45:11.541435 kubelet[2596]: E0117 00:45:11.541389 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:45:12.048953 kubelet[2596]: E0117 00:45:12.045487 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:45:12.102287 sshd[6544]: Accepted publickey for core from 10.0.0.1 port 46100 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:12.122872 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:12.152417 systemd-logind[1459]: New session 20 of user core. Jan 17 00:45:12.173362 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 17 00:45:12.946731 kubelet[2596]: E0117 00:45:12.946559 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:45:13.634561 sshd[6544]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:13.650926 kubelet[2596]: E0117 00:45:13.639781 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:13.651995 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:46100.service: Deactivated successfully. Jan 17 00:45:13.708318 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:45:13.713952 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:45:13.726015 systemd-logind[1459]: Removed session 20. Jan 17 00:45:18.675201 kubelet[2596]: E0117 00:45:18.673002 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:45:18.719508 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:46620.service - OpenSSH per-connection server daemon (10.0.0.1:46620). Jan 17 00:45:18.856245 sshd[6561]: Accepted publickey for core from 10.0.0.1 port 46620 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:18.874824 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:18.907277 systemd-logind[1459]: New session 21 of user core. Jan 17 00:45:18.928476 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:45:19.535483 sshd[6561]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:19.551365 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:46620.service: Deactivated successfully. Jan 17 00:45:19.585193 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:45:19.599948 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:45:19.612983 systemd-logind[1459]: Removed session 21. 
Jan 17 00:45:19.683443 kubelet[2596]: E0117 00:45:19.683389 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:45:23.644563 kubelet[2596]: E0117 00:45:23.643584 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:45:24.553527 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:54796.service - OpenSSH per-connection server daemon (10.0.0.1:54796). Jan 17 00:45:24.626241 sshd[6575]: Accepted publickey for core from 10.0.0.1 port 54796 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:24.629552 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:24.642408 systemd-logind[1459]: New session 22 of user core. Jan 17 00:45:24.663023 kubelet[2596]: E0117 00:45:24.643918 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:45:24.663602 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:45:24.999854 sshd[6575]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:25.016202 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:45:25.020955 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:54796.service: Deactivated successfully. Jan 17 00:45:25.036280 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:45:25.048822 systemd-logind[1459]: Removed session 22. 
Jan 17 00:45:26.668791 kubelet[2596]: E0117 00:45:26.668566 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:45:27.646413 kubelet[2596]: E0117 00:45:27.646287 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:45:29.639785 kubelet[2596]: E0117 00:45:29.639613 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:30.050367 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:54800.service - OpenSSH per-connection server daemon (10.0.0.1:54800). Jan 17 00:45:30.179885 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 54800 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:30.183028 sshd[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:30.197613 systemd-logind[1459]: New session 23 of user core. Jan 17 00:45:30.210834 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:45:30.483684 sshd[6597]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:30.496619 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:54800.service: Deactivated successfully. Jan 17 00:45:30.500772 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:45:30.506457 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:45:30.508608 systemd-logind[1459]: Removed session 23. 
Jan 17 00:45:30.656074 kubelet[2596]: E0117 00:45:30.654345 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:45:35.536692 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472). Jan 17 00:45:35.653028 containerd[1476]: time="2026-01-17T00:45:35.650551180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:45:35.775014 sshd[6637]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:35.781920 sshd[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:35.797053 containerd[1476]: time="2026-01-17T00:45:35.796741988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:35.799362 containerd[1476]: time="2026-01-17T00:45:35.799152569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:45:35.799362 containerd[1476]: time="2026-01-17T00:45:35.799318770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:45:35.800537 kubelet[2596]: E0117 00:45:35.799581 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:45:35.800537 kubelet[2596]: E0117 00:45:35.799644 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:45:35.800537 kubelet[2596]: E0117 00:45:35.799741 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-gwfnp_calico-apiserver(61ae5c95-165c-41b7-b9c1-05cec94160e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:35.800537 kubelet[2596]: E0117 00:45:35.799839 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:45:35.811353 systemd-logind[1459]: New session 24 of user core. Jan 17 00:45:35.826386 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:45:36.213713 sshd[6637]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:36.229376 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:58472.service: Deactivated successfully. Jan 17 00:45:36.236539 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:45:36.239632 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:45:36.242832 systemd-logind[1459]: Removed session 24. Jan 17 00:45:36.650341 kubelet[2596]: E0117 00:45:36.649683 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:45:36.656332 kubelet[2596]: E0117 00:45:36.656207 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:45:37.641295 containerd[1476]: time="2026-01-17T00:45:37.640814669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:45:37.768440 containerd[1476]: time="2026-01-17T00:45:37.767871154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:37.776493 containerd[1476]: time="2026-01-17T00:45:37.773450360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:45:37.776493 containerd[1476]: time="2026-01-17T00:45:37.774181833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:45:37.783818 kubelet[2596]: E0117 00:45:37.775986 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:45:37.783818 kubelet[2596]: E0117 00:45:37.778199 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:45:37.783818 kubelet[2596]: E0117 00:45:37.778335 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n22c9_calico-system(ec63a8db-6e49-4fec-8b7a-9f9042c1bf91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:37.783818 kubelet[2596]: E0117 00:45:37.778391 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:45:41.304184 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:58480.service - OpenSSH per-connection server daemon (10.0.0.1:58480). Jan 17 00:45:41.410132 sshd[6666]: Accepted publickey for core from 10.0.0.1 port 58480 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:41.414047 sshd[6666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:41.435658 systemd-logind[1459]: New session 25 of user core. Jan 17 00:45:41.483809 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 17 00:45:41.643685 containerd[1476]: time="2026-01-17T00:45:41.642664988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:45:41.728079 containerd[1476]: time="2026-01-17T00:45:41.727822755Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:41.731286 containerd[1476]: time="2026-01-17T00:45:41.731165102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:45:41.731574 containerd[1476]: time="2026-01-17T00:45:41.731280318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:45:41.734968 kubelet[2596]: E0117 00:45:41.732152 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:45:41.734968 kubelet[2596]: E0117 00:45:41.732221 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:45:41.734968 kubelet[2596]: E0117 00:45:41.732438 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:41.735597 containerd[1476]: time="2026-01-17T00:45:41.733372293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:45:41.813980 containerd[1476]: time="2026-01-17T00:45:41.813491731Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:41.817046 containerd[1476]: time="2026-01-17T00:45:41.816906966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:45:41.817258 containerd[1476]: time="2026-01-17T00:45:41.817032121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:45:41.817524 kubelet[2596]: E0117 00:45:41.817426 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:45:41.817524 kubelet[2596]: E0117 00:45:41.817492 2596 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:45:41.819829 kubelet[2596]: E0117 00:45:41.818036 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76d788f98c-msd48_calico-apiserver(09a01101-a646-4d50-93a3-7a41aecfea23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:41.819829 kubelet[2596]: E0117 00:45:41.818159 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:45:41.827701 containerd[1476]: time="2026-01-17T00:45:41.820528626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:45:41.869761 sshd[6666]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:41.893082 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:58480.service: Deactivated successfully. Jan 17 00:45:41.902196 containerd[1476]: time="2026-01-17T00:45:41.901750229Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:41.903525 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:45:41.905024 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. 
Jan 17 00:45:41.906616 kubelet[2596]: E0117 00:45:41.905611 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:45:41.906616 kubelet[2596]: E0117 00:45:41.905674 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:45:41.906616 kubelet[2596]: E0117 00:45:41.905767 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jdngt_calico-system(fa61c0c6-a39e-4c93-94a9-44f82847e39a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:41.906799 containerd[1476]: time="2026-01-17T00:45:41.905274967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:45:41.906799 containerd[1476]: time="2026-01-17T00:45:41.905422844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:45:41.908727 kubelet[2596]: E0117 00:45:41.905823 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:45:41.910897 systemd-logind[1459]: Removed session 25. 
Jan 17 00:45:46.649545 kubelet[2596]: E0117 00:45:46.643408 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:46.656308 kubelet[2596]: E0117 00:45:46.654795 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:46.974820 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:47872.service - OpenSSH per-connection server daemon (10.0.0.1:47872). Jan 17 00:45:47.106386 sshd[6695]: Accepted publickey for core from 10.0.0.1 port 47872 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:47.109944 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:47.124181 systemd-logind[1459]: New session 26 of user core. Jan 17 00:45:47.145702 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:45:47.494712 sshd[6695]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:47.513330 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:47872.service: Deactivated successfully. Jan 17 00:45:47.527712 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:45:47.541329 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:45:47.549530 systemd-logind[1459]: Removed session 26. Jan 17 00:45:47.640660 kubelet[2596]: E0117 00:45:47.640023 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:47.645158 containerd[1476]: time="2026-01-17T00:45:47.644497122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:45:47.782336 containerd[1476]: time="2026-01-17T00:45:47.779286700Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:47.786408 containerd[1476]: time="2026-01-17T00:45:47.786240604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:45:47.786555 containerd[1476]: time="2026-01-17T00:45:47.786314145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:45:47.789241 kubelet[2596]: E0117 00:45:47.788423 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:45:47.789241 kubelet[2596]: E0117 00:45:47.788526 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 
00:45:47.789241 kubelet[2596]: E0117 00:45:47.788637 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-674c9b8465-rpks6_calico-system(084547cb-aa8f-42ba-b949-f26ba954f5f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:47.789241 kubelet[2596]: E0117 00:45:47.788682 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:45:48.663612 containerd[1476]: time="2026-01-17T00:45:48.663228867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:45:48.793945 containerd[1476]: time="2026-01-17T00:45:48.793564230Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:48.796535 containerd[1476]: time="2026-01-17T00:45:48.796258531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:45:48.796535 containerd[1476]: time="2026-01-17T00:45:48.796330328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:45:48.797068 kubelet[2596]: E0117 00:45:48.796981 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:45:48.797687 kubelet[2596]: E0117 00:45:48.797067 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:45:48.797687 kubelet[2596]: E0117 00:45:48.797293 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:48.801346 containerd[1476]: time="2026-01-17T00:45:48.799746206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:45:48.894884 containerd[1476]: 
time="2026-01-17T00:45:48.894372675Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:45:48.899563 containerd[1476]: time="2026-01-17T00:45:48.899476229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:45:48.899721 containerd[1476]: time="2026-01-17T00:45:48.899610992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:45:48.900048 kubelet[2596]: E0117 00:45:48.899957 2596 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:45:48.900169 kubelet[2596]: E0117 00:45:48.900054 2596 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:45:48.900360 kubelet[2596]: E0117 00:45:48.900221 2596 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-688bc4c644-qrndd_calico-system(ee6e145a-aa21-42ce-80af-75c3ba3e223d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:45:48.900360 kubelet[2596]: E0117 00:45:48.900325 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:45:49.644843 kubelet[2596]: E0117 00:45:49.643837 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:45:52.531316 systemd[1]: Started sshd@26-10.0.0.115:22-10.0.0.1:55756.service - OpenSSH per-connection server daemon (10.0.0.1:55756). Jan 17 00:45:52.626645 sshd[6714]: Accepted publickey for core from 10.0.0.1 port 55756 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:52.635268 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:52.650536 systemd-logind[1459]: New session 27 of user core. Jan 17 00:45:52.659507 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:45:52.970728 sshd[6714]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:52.978657 systemd[1]: sshd@26-10.0.0.115:22-10.0.0.1:55756.service: Deactivated successfully. Jan 17 00:45:52.986024 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:45:52.988403 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:45:52.997233 systemd-logind[1459]: Removed session 27. Jan 17 00:45:53.653657 kubelet[2596]: E0117 00:45:53.653146 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:45:55.643634 kubelet[2596]: E0117 00:45:55.643282 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:45:57.672142 kubelet[2596]: E0117 00:45:57.654639 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:45:58.041454 systemd[1]: Started sshd@27-10.0.0.115:22-10.0.0.1:55772.service - OpenSSH per-connection server 
daemon (10.0.0.1:55772). Jan 17 00:45:58.142436 sshd[6729]: Accepted publickey for core from 10.0.0.1 port 55772 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:58.147046 sshd[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:58.156565 systemd-logind[1459]: New session 28 of user core. Jan 17 00:45:58.175611 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:45:58.567939 sshd[6729]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:58.572286 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:45:58.573656 systemd[1]: sshd@27-10.0.0.115:22-10.0.0.1:55772.service: Deactivated successfully. Jan 17 00:45:58.585820 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:45:58.599448 systemd-logind[1459]: Removed session 28. Jan 17 00:46:00.648743 kubelet[2596]: E0117 00:46:00.646618 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:46:02.672173 kubelet[2596]: E0117 00:46:02.671634 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:46:03.657707 systemd[1]: Started sshd@28-10.0.0.115:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516). 
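
The dns.go:154 warnings that recur throughout this excerpt mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and drops the rest. A minimal sketch of that check follows; this is not kubelet's actual code, and the path and the limit of three are the conventional defaults:

    # Flag resolv.conf files that exceed the three-nameserver resolver
    # limit, the condition behind kubelet's "Nameserver limits exceeded"
    # warning. Sketch only, not kubelet's implementation.
    MAX_NAMESERVERS = 3  # glibc MAXNS; kubelet warns past this cap

    def check_resolv_conf(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [parts[1] for line in f
                       if (parts := line.split())
                       and parts[0] == "nameserver"]
        applied = servers[:MAX_NAMESERVERS]
        omitted = servers[MAX_NAMESERVERS:]
        if omitted:
            print(f"limit exceeded: applying {applied}, omitting {omitted}")
        return applied

    if __name__ == "__main__":
        check_resolv_conf()
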
Jan 17 00:46:03.686504 kubelet[2596]: E0117 00:46:03.684273 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:46:03.786744 sshd[6746]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:03.790530 sshd[6746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:03.818379 systemd-logind[1459]: New session 29 of user core. Jan 17 00:46:03.837494 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:46:04.277294 sshd[6746]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:04.290696 systemd[1]: sshd@28-10.0.0.115:22-10.0.0.1:49516.service: Deactivated successfully. Jan 17 00:46:04.294943 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:46:04.299211 systemd-logind[1459]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:46:04.301340 systemd-logind[1459]: Removed session 29. 
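
Every PullImage failure above has the same shape: ghcr.io answers the resolve step with 404, so the v3.30.4 tag simply does not exist under ghcr.io/flatcar/calico, a missing tag rather than an auth or network fault (containerd logs "trying next host" and then stops because there is only one host). The sketch below reproduces the check outside containerd against the standard OCI distribution API; the anonymous-token flow is an assumption that holds for public GitHub Container Registry images:

    # Sketch: ask ghcr.io whether a tag resolves, mirroring the
    # reference-resolution step containerd performs before pulling.
    # Anonymous pull token assumed (public images only).
    import json
    import urllib.error
    import urllib.request

    ACCEPT = ", ".join([
        "application/vnd.oci.image.index.v1+json",
        "application/vnd.docker.distribution.manifest.list.v2+json",
        "application/vnd.docker.distribution.manifest.v2+json",
    ])

    def tag_exists(repo, tag):
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}", method="HEAD",
            headers={"Authorization": f"Bearer {tok}", "Accept": ACCEPT})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return False  # the NotFound containerd reports above
            raise

    print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))
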
Jan 17 00:46:04.641482 kubelet[2596]: E0117 00:46:04.641219 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:08.647593 kubelet[2596]: E0117 00:46:08.646022 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:46:08.663654 kubelet[2596]: E0117 00:46:08.663603 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:46:09.357982 systemd[1]: Started sshd@29-10.0.0.115:22-10.0.0.1:49522.service - OpenSSH per-connection server daemon (10.0.0.1:49522). Jan 17 00:46:09.549434 sshd[6786]: Accepted publickey for core from 10.0.0.1 port 49522 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:09.556899 sshd[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:09.585180 systemd-logind[1459]: New session 30 of user core. Jan 17 00:46:09.599523 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:46:09.648045 kubelet[2596]: E0117 00:46:09.646777 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:46:10.241574 sshd[6786]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:10.276976 systemd[1]: sshd@29-10.0.0.115:22-10.0.0.1:49522.service: Deactivated successfully. Jan 17 00:46:10.288440 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:46:10.296519 systemd-logind[1459]: Session 30 logged out. Waiting for processes to exit. 
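
From this point the kubelet reports ImagePullBackOff rather than ErrImagePull: after the first hard failure it retries each image on an exponential backoff instead of on every pod sync, which is why the same "Back-off pulling image" error resurfaces for each pod every few minutes for the rest of the log. A sketch of that schedule, assuming the kubelet defaults of a 10s base doubling to a 300s cap (values assumed, not read from this node's configuration):

    # Sketch of the image-pull retry schedule behind the recurring
    # ImagePullBackOff lines: exponential backoff from a 10s base to
    # a 300s cap (assumed kubelet defaults).
    def backoff_schedule(base=10.0, cap=300.0, retries=8):
        delay, schedule = base, []
        for _ in range(retries):
            schedule.append(delay)
            delay = min(delay * 2, cap)
        return schedule

    print(backoff_schedule())
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
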
Jan 17 00:46:10.314743 systemd[1]: Started sshd@30-10.0.0.115:22-10.0.0.1:49530.service - OpenSSH per-connection server daemon (10.0.0.1:49530). Jan 17 00:46:10.321413 systemd-logind[1459]: Removed session 30. Jan 17 00:46:10.556530 sshd[6801]: Accepted publickey for core from 10.0.0.1 port 49530 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:10.564517 sshd[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:10.603528 systemd-logind[1459]: New session 31 of user core. Jan 17 00:46:10.642808 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:46:12.272380 sshd[6801]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:12.313550 systemd[1]: sshd@30-10.0.0.115:22-10.0.0.1:49530.service: Deactivated successfully. Jan 17 00:46:12.318914 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:46:12.325085 systemd-logind[1459]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:46:12.387065 systemd[1]: Started sshd@31-10.0.0.115:22-10.0.0.1:49540.service - OpenSSH per-connection server daemon (10.0.0.1:49540). Jan 17 00:46:12.416318 systemd-logind[1459]: Removed session 31. Jan 17 00:46:12.638069 sshd[6815]: Accepted publickey for core from 10.0.0.1 port 49540 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:12.640831 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:12.677950 systemd-logind[1459]: New session 32 of user core. Jan 17 00:46:12.780651 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:46:14.664987 kubelet[2596]: E0117 00:46:14.664925 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:46:14.673787 kubelet[2596]: E0117 00:46:14.666339 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:46:15.261167 sshd[6815]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:15.303055 systemd[1]: sshd@31-10.0.0.115:22-10.0.0.1:49540.service: Deactivated successfully. Jan 17 00:46:15.317075 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:46:15.317489 systemd[1]: session-32.scope: Consumed 1.078s CPU time. Jan 17 00:46:15.334475 systemd-logind[1459]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:46:15.366776 systemd[1]: Started sshd@32-10.0.0.115:22-10.0.0.1:38778.service - OpenSSH per-connection server daemon (10.0.0.1:38778). 
Jan 17 00:46:15.376521 systemd-logind[1459]: Removed session 32. Jan 17 00:46:15.557329 sshd[6839]: Accepted publickey for core from 10.0.0.1 port 38778 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:15.565562 sshd[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:15.590880 systemd-logind[1459]: New session 33 of user core. Jan 17 00:46:15.604807 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 17 00:46:16.676030 sshd[6839]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:16.710023 systemd[1]: sshd@32-10.0.0.115:22-10.0.0.1:38778.service: Deactivated successfully. Jan 17 00:46:16.717936 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:46:16.730497 systemd-logind[1459]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:46:16.751130 systemd[1]: Started sshd@33-10.0.0.115:22-10.0.0.1:38792.service - OpenSSH per-connection server daemon (10.0.0.1:38792). Jan 17 00:46:16.756418 systemd-logind[1459]: Removed session 33. Jan 17 00:46:16.843443 sshd[6855]: Accepted publickey for core from 10.0.0.1 port 38792 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:16.846904 sshd[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:16.876575 systemd-logind[1459]: New session 34 of user core. Jan 17 00:46:16.883850 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 17 00:46:17.370679 sshd[6855]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:17.386775 systemd[1]: sshd@33-10.0.0.115:22-10.0.0.1:38792.service: Deactivated successfully. Jan 17 00:46:17.403081 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:46:17.405465 systemd-logind[1459]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:46:17.411557 systemd-logind[1459]: Removed session 34. 
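
Interleaved with the pull errors, sshd and systemd-logind record a steady cadence of short sessions for user core (sessions 26 through 40 in this excerpt), each with the same lifecycle: publickey accept, pam_unix session open, session-N.scope start, then teardown in reverse order. The sketch below pairs the logind open/close lines from a dump like this one and reports per-session durations; the timestamp layout is taken from the entries above, and the year is assumed because the journal omits it:

    # Sketch: pair "New session N" / "Removed session N" logind lines
    # and compute how long each SSH session lasted. The "Jan 17
    # 00:45:47.124181" timestamp layout is taken from this log.
    import re
    from datetime import datetime

    OPENED = re.compile(
        r"(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
    REMOVED = re.compile(
        r"(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def session_durations(lines, year=2026):
        ts = lambda s: datetime.strptime(f"{year} {s}", "%Y %b %d %H:%M:%S.%f")
        opened, durations = {}, {}
        for line in lines:
            for m in OPENED.finditer(line):
                opened[m.group(2)] = ts(m.group(1))
            for m in REMOVED.finditer(line):
                if m.group(2) in opened:
                    durations[m.group(2)] = ts(m.group(1)) - opened.pop(m.group(2))
        return durations
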
Jan 17 00:46:18.649439 kubelet[2596]: E0117 00:46:18.649373 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:46:19.647632 kubelet[2596]: E0117 00:46:19.647499 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:46:21.644186 kubelet[2596]: E0117 00:46:21.643906 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:22.420246 systemd[1]: Started sshd@34-10.0.0.115:22-10.0.0.1:34300.service - OpenSSH per-connection server daemon (10.0.0.1:34300). Jan 17 00:46:22.569351 sshd[6870]: Accepted publickey for core from 10.0.0.1 port 34300 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:22.577499 sshd[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:22.597421 systemd-logind[1459]: New session 35 of user core. Jan 17 00:46:22.617475 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 17 00:46:22.978482 sshd[6870]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:23.014422 systemd[1]: sshd@34-10.0.0.115:22-10.0.0.1:34300.service: Deactivated successfully. Jan 17 00:46:23.017732 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:46:23.020331 systemd-logind[1459]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:46:23.022654 systemd-logind[1459]: Removed session 35. 
Jan 17 00:46:23.651580 kubelet[2596]: E0117 00:46:23.650192 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:46:24.653527 kubelet[2596]: E0117 00:46:24.652930 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:46:24.656195 kubelet[2596]: E0117 00:46:24.656062 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:25.642415 kubelet[2596]: E0117 00:46:25.641715 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:46:28.115077 systemd[1]: Started sshd@35-10.0.0.115:22-10.0.0.1:34316.service - OpenSSH per-connection server daemon (10.0.0.1:34316). Jan 17 00:46:28.415690 sshd[6887]: Accepted publickey for core from 10.0.0.1 port 34316 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:28.423988 sshd[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:28.507497 systemd-logind[1459]: New session 36 of user core. Jan 17 00:46:28.523981 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 17 00:46:29.057229 sshd[6887]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:29.080447 systemd[1]: sshd@35-10.0.0.115:22-10.0.0.1:34316.service: Deactivated successfully. Jan 17 00:46:29.102604 systemd[1]: session-36.scope: Deactivated successfully. Jan 17 00:46:29.122242 systemd-logind[1459]: Session 36 logged out. Waiting for processes to exit. 
Jan 17 00:46:29.130977 systemd-logind[1459]: Removed session 36. Jan 17 00:46:29.645148 kubelet[2596]: E0117 00:46:29.640195 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:29.654538 kubelet[2596]: E0117 00:46:29.654457 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:46:33.641794 kubelet[2596]: E0117 00:46:33.641595 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:46:33.647740 kubelet[2596]: E0117 00:46:33.645578 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:46:34.184180 systemd[1]: Started sshd@36-10.0.0.115:22-10.0.0.1:38820.service - OpenSSH per-connection server daemon (10.0.0.1:38820). Jan 17 00:46:34.442452 sshd[6904]: Accepted publickey for core from 10.0.0.1 port 38820 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:34.446050 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:34.508625 systemd-logind[1459]: New session 37 of user core. Jan 17 00:46:34.522503 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 17 00:46:35.307479 sshd[6904]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:35.400863 systemd[1]: run-containerd-runc-k8s.io-8377109b8079b543184508a4a3ba20bba2b49e18adf2255702043d597e3029e3-runc.bZV3MI.mount: Deactivated successfully. 
Jan 17 00:46:35.442878 systemd[1]: sshd@36-10.0.0.115:22-10.0.0.1:38820.service: Deactivated successfully. Jan 17 00:46:35.456177 systemd[1]: session-37.scope: Deactivated successfully. Jan 17 00:46:35.463928 systemd-logind[1459]: Session 37 logged out. Waiting for processes to exit. Jan 17 00:46:35.515049 systemd-logind[1459]: Removed session 37. Jan 17 00:46:35.687074 kubelet[2596]: E0117 00:46:35.684483 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:46:36.654619 kubelet[2596]: E0117 00:46:36.654036 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:46:36.655081 kubelet[2596]: E0117 00:46:36.655044 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:46:40.358731 systemd[1]: Started sshd@37-10.0.0.115:22-10.0.0.1:38824.service - OpenSSH per-connection server daemon (10.0.0.1:38824). Jan 17 00:46:40.430996 sshd[6944]: Accepted publickey for core from 10.0.0.1 port 38824 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:40.437532 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:40.451559 systemd-logind[1459]: New session 38 of user core. Jan 17 00:46:40.462522 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 17 00:46:40.775508 sshd[6944]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:40.786350 systemd-logind[1459]: Session 38 logged out. Waiting for processes to exit. 
Jan 17 00:46:40.787692 systemd[1]: sshd@37-10.0.0.115:22-10.0.0.1:38824.service: Deactivated successfully. Jan 17 00:46:40.793586 systemd[1]: session-38.scope: Deactivated successfully. Jan 17 00:46:40.803589 systemd-logind[1459]: Removed session 38. Jan 17 00:46:42.652406 kubelet[2596]: E0117 00:46:42.652272 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8" Jan 17 00:46:43.640319 kubelet[2596]: E0117 00:46:43.640160 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:45.989350 systemd[1]: Started sshd@38-10.0.0.115:22-10.0.0.1:41382.service - OpenSSH per-connection server daemon (10.0.0.1:41382). Jan 17 00:46:46.216240 sshd[6958]: Accepted publickey for core from 10.0.0.1 port 41382 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:46.235015 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:46.283282 systemd-logind[1459]: New session 39 of user core. Jan 17 00:46:46.310242 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 17 00:46:47.077152 sshd[6958]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:47.097157 systemd[1]: sshd@38-10.0.0.115:22-10.0.0.1:41382.service: Deactivated successfully. Jan 17 00:46:47.105428 systemd[1]: session-39.scope: Deactivated successfully. Jan 17 00:46:47.107295 systemd-logind[1459]: Session 39 logged out. Waiting for processes to exit. Jan 17 00:46:47.113875 systemd-logind[1459]: Removed session 39. 
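
A note on the unit names: systemd runs each inbound connection as its own transient unit named sshd@<instance>-<local>:<port>-<peer>:<port>.service (for example sshd@37-10.0.0.115:22-10.0.0.1:38824.service above), so both endpoints can be recovered from the unit name alone. A small parsing sketch, with the field layout inferred from the names in this log and limited to IPv4:

    # Sketch: split systemd's per-connection sshd unit names
    # (sshd@N-LOCAL:PORT-PEER:PORT.service) into their components.
    # Layout inferred from the unit names in this log; IPv4 only.
    import re

    UNIT = re.compile(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service")

    def parse_unit(name):
        n, lip, lport, pip, pport = UNIT.fullmatch(name).groups()
        return {"instance": int(n),
                "local": (lip, int(lport)),
                "peer": (pip, int(pport))}

    print(parse_unit("sshd@37-10.0.0.115:22-10.0.0.1:38824.service"))
    # {'instance': 37, 'local': ('10.0.0.115', 22), 'peer': ('10.0.0.1', 38824)}
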
Jan 17 00:46:47.708835 kubelet[2596]: E0117 00:46:47.705844 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:47.732723 kubelet[2596]: E0117 00:46:47.732652 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-688bc4c644-qrndd" podUID="ee6e145a-aa21-42ce-80af-75c3ba3e223d" Jan 17 00:46:48.649658 kubelet[2596]: E0117 00:46:48.646368 2596 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:46:48.651138 kubelet[2596]: E0117 00:46:48.650798 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-gwfnp" podUID="61ae5c95-165c-41b7-b9c1-05cec94160e8" Jan 17 00:46:48.651138 kubelet[2596]: E0117 00:46:48.650949 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n22c9" podUID="ec63a8db-6e49-4fec-8b7a-9f9042c1bf91" Jan 17 00:46:49.645842 kubelet[2596]: E0117 00:46:49.643830 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d788f98c-msd48" podUID="09a01101-a646-4d50-93a3-7a41aecfea23" Jan 17 00:46:50.653211 kubelet[2596]: E0117 00:46:50.653059 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jdngt" podUID="fa61c0c6-a39e-4c93-94a9-44f82847e39a" Jan 17 00:46:52.118887 systemd[1]: Started sshd@39-10.0.0.115:22-10.0.0.1:41386.service - OpenSSH per-connection server daemon (10.0.0.1:41386). Jan 17 00:46:52.318693 sshd[6979]: Accepted publickey for core from 10.0.0.1 port 41386 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:46:52.323891 sshd[6979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:46:52.359053 systemd-logind[1459]: New session 40 of user core. Jan 17 00:46:52.372970 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 17 00:46:52.901427 sshd[6979]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:52.945610 systemd[1]: sshd@39-10.0.0.115:22-10.0.0.1:41386.service: Deactivated successfully. Jan 17 00:46:52.953691 systemd[1]: session-40.scope: Deactivated successfully. Jan 17 00:46:52.955296 systemd-logind[1459]: Session 40 logged out. Waiting for processes to exit. Jan 17 00:46:52.957309 systemd-logind[1459]: Removed session 40. Jan 17 00:46:54.646471 kubelet[2596]: E0117 00:46:54.640475 2596 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-674c9b8465-rpks6" podUID="084547cb-aa8f-42ba-b949-f26ba954f5f8"