Jan 17 00:38:32.599974 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:38:32.600015 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:38:32.600038 kernel: BIOS-provided physical RAM map:
Jan 17 00:38:32.600048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:38:32.600059 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:38:32.600069 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:38:32.600082 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 17 00:38:32.600093 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 17 00:38:32.600103 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:38:32.600121 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 00:38:32.600132 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:38:32.600144 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:38:32.600199 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 00:38:32.600213 kernel: NX (Execute Disable) protection: active
Jan 17 00:38:32.600224 kernel: APIC: Static calls initialized
Jan 17 00:38:32.600279 kernel: SMBIOS 2.8 present.
Jan 17 00:38:32.600291 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 17 00:38:32.600302 kernel: Hypervisor detected: KVM
Jan 17 00:38:32.600312 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:38:32.600324 kernel: kvm-clock: using sched offset of 12571592736 cycles
Jan 17 00:38:32.600334 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:38:32.600345 kernel: tsc: Detected 2445.424 MHz processor
Jan 17 00:38:32.600355 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:38:32.600366 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:38:32.600383 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 17 00:38:32.600393 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:38:32.600405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:38:32.600415 kernel: Using GB pages for direct mapping
Jan 17 00:38:32.600425 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:38:32.600436 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 17 00:38:32.600447 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600458 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600468 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600524 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 17 00:38:32.600534 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600545 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600556 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600566 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:38:32.600577 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 17 00:38:32.600589 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 17 00:38:32.600606 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 17 00:38:32.600621 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 17 00:38:32.600710 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 17 00:38:32.600724 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 17 00:38:32.600783 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 17 00:38:32.600797 kernel: No NUMA configuration found
Jan 17 00:38:32.600808 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 17 00:38:32.600827 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 17 00:38:32.600838 kernel: Zone ranges:
Jan 17 00:38:32.600850 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:38:32.600861 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 17 00:38:32.600872 kernel: Normal empty
Jan 17 00:38:32.600883 kernel: Movable zone start for each node
Jan 17 00:38:32.600895 kernel: Early memory node ranges
Jan 17 00:38:32.600906 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:38:32.600917 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 17 00:38:32.600929 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 17 00:38:32.600945 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:38:32.601000 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:38:32.601014 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 17 00:38:32.601025 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:38:32.601038 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:38:32.601050 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:38:32.601063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:38:32.601074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:38:32.601085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:38:32.601103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:38:32.601114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:38:32.601127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:38:32.601139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:38:32.601152 kernel: TSC deadline timer available
Jan 17 00:38:32.601165 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 00:38:32.601177 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:38:32.601190 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 00:38:32.601249 kernel: kvm-guest: setup PV sched yield
Jan 17 00:38:32.601269 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 00:38:32.601280 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:38:32.601291 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:38:32.601302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 00:38:32.601314 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 17 00:38:32.601325 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 17 00:38:32.601336 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 00:38:32.601347 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:38:32.601358 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:38:32.601375 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:38:32.601386 kernel: random: crng init done
Jan 17 00:38:32.601398 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:38:32.603317 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:38:32.603383 kernel: Fallback order for Node 0: 0
Jan 17 00:38:32.603395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 17 00:38:32.603406 kernel: Policy zone: DMA32
Jan 17 00:38:32.603418 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:38:32.603450 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 17 00:38:32.603463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 00:38:32.603474 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:38:32.603485 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:38:32.603498 kernel: Dynamic Preempt: voluntary
Jan 17 00:38:32.603509 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:38:32.603522 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:38:32.603535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 00:38:32.603547 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:38:32.603564 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:38:32.603575 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:38:32.603585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:38:32.603596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 00:38:32.603716 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 00:38:32.603732 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:38:32.603790 kernel: Console: colour VGA+ 80x25
Jan 17 00:38:32.603802 kernel: printk: console [ttyS0] enabled
Jan 17 00:38:32.603814 kernel: ACPI: Core revision 20230628
Jan 17 00:38:32.603832 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:38:32.603845 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:38:32.603856 kernel: x2apic enabled
Jan 17 00:38:32.603867 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:38:32.603878 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 00:38:32.603890 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 00:38:32.603902 kernel: kvm-guest: setup PV IPIs
Jan 17 00:38:32.603914 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:38:32.603946 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:38:32.603958 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 17 00:38:32.603970 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:38:32.603982 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:38:32.603999 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:38:32.604011 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:38:32.604024 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:38:32.604037 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:38:32.604055 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:38:32.604068 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 00:38:32.604130 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 00:38:32.604146 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:38:32.604161 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 00:38:32.604174 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:38:32.604186 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:38:32.604196 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:38:32.604210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:38:32.604230 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:38:32.604241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:38:32.604253 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 00:38:32.604266 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:38:32.604277 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:38:32.604289 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:38:32.604301 kernel: landlock: Up and running.
Jan 17 00:38:32.604312 kernel: SELinux: Initializing.
Jan 17 00:38:32.604326 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:38:32.604344 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:38:32.604356 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 17 00:38:32.604368 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:38:32.604380 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:38:32.604393 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:38:32.604404 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 17 00:38:32.604416 kernel: signal: max sigframe size: 1776
Jan 17 00:38:32.604428 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:38:32.606259 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:38:32.606543 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:38:32.606884 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:38:32.606900 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:38:32.606913 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 00:38:32.606926 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 00:38:32.606937 kernel: smpboot: Max logical packages: 1
Jan 17 00:38:32.606948 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 17 00:38:32.606960 kernel: devtmpfs: initialized
Jan 17 00:38:32.606971 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:38:32.607004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:38:32.607018 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 00:38:32.607030 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:38:32.607042 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:38:32.607054 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:38:32.607069 kernel: audit: type=2000 audit(1768610306.106:1): state=initialized audit_enabled=0 res=1
Jan 17 00:38:32.607081 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:38:32.607093 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:38:32.607105 kernel: cpuidle: using governor menu
Jan 17 00:38:32.607123 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:38:32.607137 kernel: dca service started, version 1.12.1
Jan 17 00:38:32.607151 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:38:32.607164 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:38:32.607177 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:38:32.607190 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:38:32.607204 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:38:32.607217 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:38:32.607231 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:38:32.607247 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:38:32.607266 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:38:32.607281 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:38:32.607292 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:38:32.607304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:38:32.607317 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:38:32.607330 kernel: ACPI: Interpreter enabled
Jan 17 00:38:32.607343 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:38:32.607357 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:38:32.607376 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:38:32.607389 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:38:32.607402 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:38:32.607416 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:38:32.608958 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:38:32.609267 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:38:32.609499 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:38:32.609525 kernel: PCI host bridge to bus 0000:00
Jan 17 00:38:32.611883 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:38:32.612154 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:38:32.612949 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:38:32.613168 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 00:38:32.615873 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:38:32.616140 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 17 00:38:32.616364 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:38:32.616958 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:38:32.617311 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 00:38:32.617544 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 17 00:38:32.620017 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 17 00:38:32.620257 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 17 00:38:32.620483 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:38:32.622089 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:38:32.622373 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 17 00:38:32.622600 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 17 00:38:32.622973 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 17 00:38:32.623361 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:38:32.623718 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 00:38:32.624791 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 17 00:38:32.625033 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 17 00:38:32.625325 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:38:32.625548 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 17 00:38:32.626269 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 17 00:38:32.626492 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 17 00:38:32.626852 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 17 00:38:32.627238 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:38:32.627555 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:38:32.628025 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:38:32.628366 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 17 00:38:32.628895 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 17 00:38:32.629233 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:38:32.629507 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 00:38:32.629536 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:38:32.629550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:38:32.629562 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:38:32.629575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:38:32.629587 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:38:32.629600 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:38:32.629611 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:38:32.629624 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:38:32.629728 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:38:32.630053 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:38:32.630068 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:38:32.630081 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:38:32.630092 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:38:32.630104 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:38:32.630115 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:38:32.630127 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:38:32.630138 kernel: iommu: Default domain type: Translated
Jan 17 00:38:32.630157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:38:32.630169 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:38:32.630181 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:38:32.630193 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:38:32.630206 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 17 00:38:32.630429 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:38:32.630852 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:38:32.631131 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:38:32.631153 kernel: vgaarb: loaded
Jan 17 00:38:32.631174 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:38:32.631186 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:38:32.631198 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:38:32.631210 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:38:32.631223 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:38:32.631235 kernel: pnp: PnP ACPI init
Jan 17 00:38:32.631805 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:38:32.631830 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 00:38:32.631850 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:38:32.631863 kernel: NET: Registered PF_INET protocol family
Jan 17 00:38:32.631876 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:38:32.631888 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:38:32.631900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:38:32.631913 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:38:32.631926 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:38:32.631938 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:38:32.631951 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:38:32.631968 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:38:32.631982 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:38:32.631995 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:38:32.632211 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:38:32.632416 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:38:32.632804 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:38:32.633019 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 00:38:32.633230 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:38:32.633493 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 17 00:38:32.633513 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:38:32.633526 kernel: Initialise system trusted keyrings
Jan 17 00:38:32.633538 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:38:32.633551 kernel: Key type asymmetric registered
Jan 17 00:38:32.633563 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:38:32.633576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:38:32.633588 kernel: io scheduler mq-deadline registered
Jan 17 00:38:32.633601 kernel: io scheduler kyber registered
Jan 17 00:38:32.633620 kernel: io scheduler bfq registered
Jan 17 00:38:32.633716 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:38:32.633732 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:38:32.633802 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:38:32.633814 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 00:38:32.633828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:38:32.633839 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:38:32.633852 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:38:32.633865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:38:32.633884 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:38:32.634363 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 00:38:32.634582 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 00:38:32.634600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:38:32.634965 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:38:30 UTC (1768610310)
Jan 17 00:38:32.635192 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:38:32.635213 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:38:32.635225 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:38:32.635246 kernel: Segment Routing with IPv6
Jan 17 00:38:32.635258 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:38:32.635270 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:38:32.635283 kernel: Key type dns_resolver registered
Jan 17 00:38:32.635295 kernel: IPI shorthand broadcast: enabled
Jan 17 00:38:32.635307 kernel: sched_clock: Marking stable (4846030882, 601167496)->(6231744829, -784546451)
Jan 17 00:38:32.635320 kernel: registered taskstats version 1
Jan 17 00:38:32.635332 kernel: Loading compiled-in X.509 certificates
Jan 17 00:38:32.635345 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:38:32.635363 kernel: Key type .fscrypt registered
Jan 17 00:38:32.635374 kernel: Key type fscrypt-provisioning registered
Jan 17 00:38:32.635388 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:38:32.635400 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:38:32.635413 kernel: ima: No architecture policies found
Jan 17 00:38:32.635424 kernel: hrtimer: interrupt took 7097303 ns
Jan 17 00:38:32.635437 kernel: clk: Disabling unused clocks
Jan 17 00:38:32.635448 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:38:32.635462 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:38:32.635480 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:38:32.635492 kernel: Run /init as init process
Jan 17 00:38:32.635505 kernel: with arguments:
Jan 17 00:38:32.635517 kernel: /init
Jan 17 00:38:32.635529 kernel: with environment:
Jan 17 00:38:32.635541 kernel: HOME=/
Jan 17 00:38:32.635553 kernel: TERM=linux
Jan 17 00:38:32.635568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:38:32.635590 systemd[1]: Detected virtualization kvm.
Jan 17 00:38:32.635604 systemd[1]: Detected architecture x86-64.
Jan 17 00:38:32.635615 systemd[1]: Running in initrd.
Jan 17 00:38:32.635630 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:38:32.637074 systemd[1]: Hostname set to <localhost>.
Jan 17 00:38:32.637131 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:38:32.637145 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:38:32.637194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:38:32.637218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:38:32.637312 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:38:32.637361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:38:32.637375 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:38:32.637439 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:38:32.637492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:38:32.637543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:38:32.637555 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:38:32.637602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:38:32.637615 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:38:32.637828 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:38:32.637869 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:38:32.637889 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:38:32.637907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:38:32.637959 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:38:32.638037 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:38:32.638053 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:38:32.638067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:38:32.638080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:38:32.638095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:38:32.638108 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:38:32.638128 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:38:32.638143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:38:32.638155 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:38:32.638169 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:38:32.638182 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:38:32.638202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:38:32.638215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:38:32.638229 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:38:32.638280 systemd-journald[194]: Collecting audit messages is disabled.
Jan 17 00:38:32.638321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:38:32.638337 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:38:32.638355 systemd-journald[194]: Journal started
Jan 17 00:38:32.638382 systemd-journald[194]: Runtime Journal (/run/log/journal/93fc8e984439463ea3b7269e95b49aa7) is 6.0M, max 48.4M, 42.3M free.
Jan 17 00:38:32.641421 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:38:32.647113 systemd-modules-load[195]: Inserted module 'overlay'
Jan 17 00:38:33.210833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:38:33.211189 kernel: Bridge firewalling registered
Jan 17 00:38:33.211217 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:38:32.955606 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 17 00:38:33.245947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:38:33.260598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:38:33.287113 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:38:33.400981 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:38:33.403200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:38:33.457260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:38:33.506454 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:38:33.592060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:38:33.611179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:38:33.627932 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:38:33.653345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:38:33.706059 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:38:33.738140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:38:33.787539 dracut-cmdline[230]: dracut-dracut-053
Jan 17 00:38:33.799962 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:38:34.003569 systemd-resolved[233]: Positive Trust Anchors:
Jan 17 00:38:34.003725 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:38:34.003821 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:38:34.022466 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jan 17 00:38:34.034445 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:38:34.118341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:38:34.196883 kernel: SCSI subsystem initialized
Jan 17 00:38:34.223580 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:38:34.257432 kernel: iscsi: registered transport (tcp)
Jan 17 00:38:34.324460 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:38:34.324580 kernel: QLogic iSCSI HBA Driver
Jan 17 00:38:34.509271 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:38:34.547030 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:38:34.645060 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:38:34.645389 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:38:34.659784 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:38:34.778928 kernel: raid6: avx2x4 gen() 6707 MB/s
Jan 17 00:38:34.797841 kernel: raid6: avx2x2 gen() 18297 MB/s
Jan 17 00:38:34.821317 kernel: raid6: avx2x1 gen() 11102 MB/s
Jan 17 00:38:34.821402 kernel: raid6: using algorithm avx2x2 gen() 18297 MB/s
Jan 17 00:38:34.849303 kernel: raid6: .... xor() 12427 MB/s, rmw enabled
Jan 17 00:38:34.849577 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:38:34.908865 kernel: xor: automatically using best checksumming function avx
Jan 17 00:38:35.705167 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:38:35.751909 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:38:35.813478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:38:35.904372 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 17 00:38:35.925315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:38:35.987017 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:38:36.051558 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 17 00:38:36.207727 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:38:36.242345 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:38:36.513399 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:38:36.581095 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:38:36.638947 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:38:36.649993 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:38:36.650050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:38:36.650106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:38:36.653172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:38:36.711731 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:38:36.762597 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 00:38:36.763178 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:38:36.764032 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:38:36.765047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:38:36.794147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:38:36.827918 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 00:38:36.830621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:38:36.838172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:38:36.895476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:38:36.895580 kernel: GPT:9289727 != 19775487
Jan 17 00:38:36.895596 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:38:36.895611 kernel: GPT:9289727 != 19775487
Jan 17 00:38:36.850520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:38:36.936823 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:38:36.936923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:38:36.961361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:38:37.435160 kernel: libata version 3.00 loaded.
Jan 17 00:38:37.495740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:38:38.131093 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:38:38.131235 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:38:38.131255 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:38:38.131946 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Jan 17 00:38:38.131965 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:38:38.131980 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463)
Jan 17 00:38:38.131995 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:38:38.132216 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:38:38.132437 kernel: scsi host0: ahci
Jan 17 00:38:38.133580 kernel: scsi host1: ahci
Jan 17 00:38:38.134048 kernel: scsi host2: ahci
Jan 17 00:38:38.134264 kernel: scsi host3: ahci
Jan 17 00:38:38.134851 kernel: scsi host4: ahci
Jan 17 00:38:38.135308 kernel: scsi host5: ahci
Jan 17 00:38:38.135535 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 17 00:38:38.135551 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 17 00:38:38.135566 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 17 00:38:38.135581 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 17 00:38:38.135596 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 17 00:38:38.135611 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 17 00:38:38.135626 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 00:38:38.136111 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:38:38.136132 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:38:38.136153 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:38:38.136167 kernel: ata3.00: applying bridge limits
Jan 17 00:38:38.136182 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:38:38.136198 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:38:38.136212 kernel: ata3.00: configured for UDMA/100
Jan 17 00:38:38.136227 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:38:38.136241 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:38:38.123571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:38:38.157977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:38:38.221135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:38:38.239817 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:38:38.300164 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:38:38.300927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:38:38.258420 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:38:38.325994 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:38:38.339282 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:38:38.393403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:38:38.393438 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:38:38.394295 disk-uuid[563]: Primary Header is updated.
Jan 17 00:38:38.394295 disk-uuid[563]: Secondary Entries is updated.
Jan 17 00:38:38.394295 disk-uuid[563]: Secondary Header is updated.
Jan 17 00:38:38.419056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:38:38.419094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:38:38.426481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:38:39.478270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:38:39.497911 disk-uuid[564]: The operation has completed successfully.
Jan 17 00:38:39.647032 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:38:39.647312 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:38:39.723062 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:38:39.741043 sh[590]: Success
Jan 17 00:38:39.835047 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:38:40.048250 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:38:40.094318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:38:40.104085 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:38:40.250063 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:38:40.250145 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:38:40.250184 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:38:40.257913 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:38:40.257948 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:38:40.296081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:38:40.306006 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:38:40.340054 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:38:40.356718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:38:40.423922 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:38:40.423988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:38:40.424009 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:38:40.454864 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:38:40.497488 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:38:40.510905 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:38:40.553381 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:38:40.595517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:38:41.529607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:38:41.560420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:38:41.606957 ignition[701]: Ignition 2.19.0
Jan 17 00:38:41.607057 ignition[701]: Stage: fetch-offline
Jan 17 00:38:41.607219 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:38:41.607243 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:38:41.607978 ignition[701]: parsed url from cmdline: ""
Jan 17 00:38:41.607986 ignition[701]: no config URL provided
Jan 17 00:38:41.607997 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:38:41.608013 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:38:41.608136 ignition[701]: op(1): [started] loading QEMU firmware config module
Jan 17 00:38:41.608143 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 00:38:41.675185 systemd-networkd[778]: lo: Link UP
Jan 17 00:38:41.675192 systemd-networkd[778]: lo: Gained carrier
Jan 17 00:38:41.679204 systemd-networkd[778]: Enumeration completed
Jan 17 00:38:41.681893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:38:41.681974 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:38:41.720500 ignition[701]: op(1): [finished] loading QEMU firmware config module
Jan 17 00:38:41.681980 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:38:41.720542 ignition[701]: QEMU firmware config was not found. Ignoring...
Jan 17 00:38:41.694051 systemd[1]: Reached target network.target - Network.
Jan 17 00:38:41.694437 systemd-networkd[778]: eth0: Link UP
Jan 17 00:38:41.694443 systemd-networkd[778]: eth0: Gained carrier
Jan 17 00:38:41.694489 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:38:41.727384 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:38:42.445127 ignition[701]: parsing config with SHA512: 3309ab21acd2befd497c95aea3ca16ba0e72b652ad75b5c6362f345138c81b3767b23bbb0d39f33247dfebf92260792afa7dd9601228f1548d30adac20989929
Jan 17 00:38:42.597102 unknown[701]: fetched base config from "system"
Jan 17 00:38:42.597220 unknown[701]: fetched user config from "qemu"
Jan 17 00:38:42.648993 ignition[701]: fetch-offline: fetch-offline passed
Jan 17 00:38:42.649563 ignition[701]: Ignition finished successfully
Jan 17 00:38:42.679387 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:38:42.704013 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 00:38:42.742172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:38:42.955921 ignition[784]: Ignition 2.19.0
Jan 17 00:38:42.956441 ignition[784]: Stage: kargs
Jan 17 00:38:42.958693 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:38:42.958712 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:38:42.963547 ignition[784]: kargs: kargs passed
Jan 17 00:38:42.963982 ignition[784]: Ignition finished successfully
Jan 17 00:38:43.013160 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:38:43.048563 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:38:43.319461 ignition[793]: Ignition 2.19.0
Jan 17 00:38:43.319804 ignition[793]: Stage: disks
Jan 17 00:38:43.320214 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:38:43.320240 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:38:43.346552 ignition[793]: disks: disks passed
Jan 17 00:38:43.354020 ignition[793]: Ignition finished successfully
Jan 17 00:38:43.367202 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:38:43.378951 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:38:43.388558 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:38:43.399914 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:38:43.409452 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:38:43.414465 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:38:43.446065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:38:43.507261 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:38:43.522169 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:38:43.530719 systemd-networkd[778]: eth0: Gained IPv6LL
Jan 17 00:38:43.605539 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:38:44.497146 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:38:44.507217 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:38:44.547352 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:38:44.628758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:38:44.731084 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:38:44.745908 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:38:44.746065 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:38:44.746104 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:38:44.943159 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Jan 17 00:38:44.950201 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:38:44.998136 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:38:44.998167 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:38:44.998184 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:38:45.008185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:38:45.029433 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:38:45.035201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:38:45.236415 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:38:45.313611 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:38:45.346181 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:38:45.373014 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:38:45.829467 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:38:45.873564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:38:45.900321 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:38:45.932200 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:38:45.952564 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:38:46.076400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:38:46.227620 ignition[924]: INFO : Ignition 2.19.0
Jan 17 00:38:46.235231 ignition[924]: INFO : Stage: mount
Jan 17 00:38:46.235231 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:38:46.235231 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:38:46.235231 ignition[924]: INFO : mount: mount passed
Jan 17 00:38:46.235231 ignition[924]: INFO : Ignition finished successfully
Jan 17 00:38:46.253113 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:38:46.294083 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:38:46.435484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:38:46.478838 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Jan 17 00:38:46.489861 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:38:46.489930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:38:46.501123 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:38:46.540722 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:38:46.551069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:38:46.683129 ignition[954]: INFO : Ignition 2.19.0 Jan 17 00:38:46.683129 ignition[954]: INFO : Stage: files Jan 17 00:38:46.699272 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:38:46.699272 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:38:46.699272 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:38:46.748507 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:38:46.748507 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:38:46.793894 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:38:46.816501 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:38:46.883415 unknown[954]: wrote ssh authorized keys file for user: core Jan 17 00:38:46.905490 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:38:46.950579 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:38:46.950579 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:38:46.950579 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:38:47.081500 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:38:47.255456 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:38:48.116183 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:38:48.116183 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:38:48.116183 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:38:48.116183 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:38:48.184269 ignition[954]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:38:48.184269 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:38:48.743456 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:38:51.854589 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:38:51.854589 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 17 00:38:51.893743 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:38:52.057177 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:38:52.090057 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:38:52.099289 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:38:52.099289 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:38:52.099289 ignition[954]: INFO : files: op(14): [finished] setting preset to 
enabled for "prepare-helm.service" Jan 17 00:38:52.099289 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:38:52.099289 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:38:52.099289 ignition[954]: INFO : files: files passed Jan 17 00:38:52.099289 ignition[954]: INFO : Ignition finished successfully Jan 17 00:38:52.177614 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:38:52.245241 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:38:52.259036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:38:52.259752 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:38:52.260229 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:38:52.326953 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:38:52.347982 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:38:52.347982 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:38:52.378113 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:38:52.395833 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:38:52.404828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:38:52.442397 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:38:52.556389 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:38:52.556617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:38:52.573002 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:38:52.587194 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:38:52.602105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:38:52.642120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:38:52.689473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:38:52.725121 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:38:52.792265 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:38:52.808127 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:38:52.827064 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:38:52.838292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:38:52.838537 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:38:52.853552 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:38:52.867129 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:38:52.903114 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:38:52.940349 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 00:38:52.967236 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:38:53.023273 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:38:53.087431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:38:53.156347 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:38:53.167170 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:38:53.183320 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:38:53.193318 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:38:53.194988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:38:53.240949 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:38:53.241275 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:38:53.265597 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:38:53.280443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:38:53.321350 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:38:53.321617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:38:53.336411 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:38:53.336722 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:38:53.348131 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:38:53.416365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:38:53.425020 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:38:53.454986 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:38:53.471910 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:38:53.489935 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:38:53.490756 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:38:53.496444 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:38:53.496572 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:38:53.496886 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:38:53.497039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:38:53.497366 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:38:53.497545 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:38:53.568544 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:38:53.701870 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:38:53.704765 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:38:53.747557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:38:53.748050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:38:53.748567 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:38:53.804611 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:38:53.805524 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 00:38:53.846492 ignition[1009]: INFO : Ignition 2.19.0 Jan 17 00:38:53.846492 ignition[1009]: INFO : Stage: umount Jan 17 00:38:53.846492 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:38:53.846492 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:38:53.846492 ignition[1009]: INFO : umount: umount passed Jan 17 00:38:53.846492 ignition[1009]: INFO : Ignition finished successfully Jan 17 00:38:53.873616 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:38:53.876541 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:38:53.890724 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:38:53.897103 systemd[1]: Stopped target network.target - Network. Jan 17 00:38:53.914128 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:38:53.914261 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:38:53.923746 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:38:53.923922 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:38:53.932297 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:38:53.932414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:38:53.949584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:38:53.950762 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:38:53.972359 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:38:53.982223 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:38:53.990964 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:38:53.991162 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:38:54.010191 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:38:54.010413 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:38:54.025248 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:38:54.025408 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:38:54.030902 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 17 00:38:54.047118 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:38:54.047388 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:38:54.069210 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:38:54.069751 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:38:54.101016 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:38:54.101153 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:38:54.181907 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:38:54.226410 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:38:54.226534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:38:54.238215 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:38:54.238304 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:38:54.244883 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:38:54.244972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 17 00:38:54.292705 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:38:54.292884 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:38:54.301184 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:38:54.365056 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:38:54.366379 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:38:54.392322 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:38:54.393070 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:38:54.404015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:38:54.404083 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:38:54.420129 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:38:54.420227 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:38:54.522158 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:38:54.522270 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:38:54.527904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:38:54.528005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:38:54.607346 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:38:54.685035 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:38:54.685975 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:38:54.723149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:38:54.724252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:38:54.786063 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:38:54.787879 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:38:54.820320 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:38:54.821701 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:38:54.853355 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:38:54.895123 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:38:54.982427 systemd[1]: Switching root. Jan 17 00:38:55.067884 systemd-journald[194]: Journal stopped Jan 17 00:38:59.501719 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 17 00:38:59.501892 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:38:59.501926 kernel: SELinux: policy capability open_perms=1 Jan 17 00:38:59.501945 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:38:59.501965 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:38:59.501985 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:38:59.502013 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:38:59.502039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:38:59.502065 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:38:59.502093 kernel: audit: type=1403 audit(1768610335.743:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:38:59.502112 systemd[1]: Successfully loaded SELinux policy in 133.899ms. 
Jan 17 00:38:59.502153 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.534ms. Jan 17 00:38:59.502184 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:38:59.502204 systemd[1]: Detected virtualization kvm. Jan 17 00:38:59.502225 systemd[1]: Detected architecture x86-64. Jan 17 00:38:59.502245 systemd[1]: Detected first boot. Jan 17 00:38:59.502269 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:38:59.502289 zram_generator::config[1076]: No configuration found. Jan 17 00:38:59.502310 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:38:59.502330 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:38:59.502349 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:38:59.502368 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:38:59.502389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:38:59.502408 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:38:59.502434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:38:59.502456 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:38:59.502474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:38:59.502494 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:38:59.502513 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:38:59.502532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:38:59.502551 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:38:59.502570 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:38:59.502593 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:38:59.502619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:38:59.502738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:38:59.502765 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:38:59.502784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:38:59.502861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:38:59.502882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:38:59.502902 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:38:59.502920 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:38:59.502947 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:38:59.502966 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:38:59.502985 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
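The long +PAM +AUDIT ... default-hierarchy=unified string is systemd 255's compile-time feature list, and the virtualization and architecture probes logged alongside it can be reproduced on any booted system with standard commands:

    # prints the systemd version plus the same feature string logged above
    systemctl --version
    # prints "kvm" on this guest, matching "Detected virtualization kvm"
    systemd-detect-virt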
Jan 17 00:38:59.503002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:38:59.503019 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:38:59.503037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:38:59.503057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:38:59.503121 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:38:59.503203 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:38:59.503228 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:38:59.503245 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:38:59.503262 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:38:59.503282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:59.503299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:38:59.503317 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:38:59.503334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:38:59.503350 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:38:59.503369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:38:59.503394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:38:59.503413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:38:59.503429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:38:59.503448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:38:59.503466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:38:59.503485 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:38:59.503501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:38:59.503519 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:38:59.503542 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:38:59.503559 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:38:59.503576 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:38:59.503595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:38:59.503614 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:38:59.503733 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:38:59.503763 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:38:59.503789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:59.503880 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
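The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units being started here are all instances of systemd's modprobe@.service template, which loads the kernel module named by the instance specifier. A sketch of that template, paraphrased from stock systemd rather than copied from this image:

    # modprobe@.service (sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # the "-" prefix means a missing module is tolerated rather than failing the unit
    ExecStart=-/sbin/modprobe -abq %I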
Jan 17 00:38:59.503905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:38:59.503922 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:38:59.503969 systemd-journald[1176]: Collecting audit messages is disabled. Jan 17 00:38:59.503998 kernel: ACPI: bus type drm_connector registered Jan 17 00:38:59.504016 systemd-journald[1176]: Journal started Jan 17 00:38:59.504050 systemd-journald[1176]: Runtime Journal (/run/log/journal/93fc8e984439463ea3b7269e95b49aa7) is 6.0M, max 48.4M, 42.3M free. Jan 17 00:38:59.527847 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:38:59.532934 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:38:59.607300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:38:59.631750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:38:59.660171 kernel: loop: module loaded Jan 17 00:38:59.682376 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:38:59.689040 kernel: fuse: init (API version 7.39) Jan 17 00:38:59.690092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:38:59.702585 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:38:59.704313 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:38:59.710530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:38:59.717320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:38:59.726251 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:38:59.726530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:38:59.733530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:38:59.735321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:38:59.746603 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:38:59.747082 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:38:59.755494 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:38:59.756599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:38:59.772430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:38:59.780991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:38:59.793502 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:38:59.837342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:38:59.846893 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:38:59.875057 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:38:59.887928 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:38:59.915260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:38:59.992137 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:39:00.071302 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 17 00:39:00.086047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:39:00.097002 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:39:00.115221 systemd-journald[1176]: Time spent on flushing to /var/log/journal/93fc8e984439463ea3b7269e95b49aa7 is 60.196ms for 930 entries. Jan 17 00:39:00.115221 systemd-journald[1176]: System Journal (/var/log/journal/93fc8e984439463ea3b7269e95b49aa7) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:39:00.290146 systemd-journald[1176]: Received client request to flush runtime journal. Jan 17 00:39:00.122221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:39:00.142221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:39:00.200051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:39:00.221978 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:39:00.243049 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:39:00.253618 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:39:00.274511 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:39:00.296940 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:39:00.314589 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:39:00.322462 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 17 00:39:00.322487 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 17 00:39:00.335442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:39:00.351981 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:39:00.383019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:39:00.393289 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:39:00.631106 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:39:00.656171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:39:00.843427 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 17 00:39:00.843514 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 17 00:39:01.083867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:39:02.312767 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:39:02.375966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:39:02.451015 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Jan 17 00:39:02.644204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:39:02.718058 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:39:02.810059 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:39:03.092137 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 17 00:39:03.213852 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:39:03.413031 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1247) Jan 17 00:39:03.846091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:39:03.899009 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:39:04.211282 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:39:04.222524 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:39:04.223153 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:39:04.351736 systemd-networkd[1250]: lo: Link UP Jan 17 00:39:04.352568 systemd-networkd[1250]: lo: Gained carrier Jan 17 00:39:04.358288 systemd-networkd[1250]: Enumeration completed Jan 17 00:39:04.360291 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:39:04.366179 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:04.366186 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:39:04.384219 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:04.384271 systemd-networkd[1250]: eth0: Link UP Jan 17 00:39:04.384277 systemd-networkd[1250]: eth0: Gained carrier Jan 17 00:39:04.384291 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:04.426780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:39:04.460214 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:39:04.655171 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:39:04.617375 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:39:04.782971 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:39:04.935323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:39:05.848290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:05.870775 systemd-networkd[1250]: eth0: Gained IPv6LL Jan 17 00:39:05.923089 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:39:06.013992 kernel: kvm_amd: TSC scaling supported Jan 17 00:39:06.014114 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:39:06.014140 kernel: kvm_amd: Nested Paging enabled Jan 17 00:39:06.016201 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:39:06.018932 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:39:06.769592 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:39:06.846480 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:39:06.905463 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:39:06.956605 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:39:07.069010 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
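eth0 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name" before taking the DHCPv4 lease (10.0.0.123/16). A .network file of that catch-all shape looks roughly like this; an illustrative sketch, not the exact file shipped on the image:

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    # match every interface not claimed by an earlier .network file
    Name=*

    [Network]
    DHCP=yes

Once the link is up, networkctl status eth0 shows the same address, gateway, and carrier state that networkd logs here.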
Jan 17 00:39:07.086877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:39:07.110042 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:39:07.143458 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:39:07.359942 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:39:07.376921 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:39:07.396469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:39:07.400132 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:39:07.414074 systemd[1]: Reached target machines.target - Containers. Jan 17 00:39:07.431266 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:39:07.498807 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:39:07.512108 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:39:07.551503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:39:07.566220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:39:07.599903 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:39:07.635263 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:39:07.637251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:39:07.665961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:39:07.710312 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:39:07.730500 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:39:07.732030 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:39:07.793443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:39:07.880164 kernel: loop1: detected capacity change from 0 to 224512 Jan 17 00:39:08.023252 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:39:08.283299 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:39:08.467718 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 00:39:08.590280 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:39:08.702098 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:39:08.704029 (sd-merge)[1315]: Merged extensions into '/usr'. Jan 17 00:39:08.797357 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:39:08.797432 systemd[1]: Reloading... Jan 17 00:39:09.012884 zram_generator::config[1342]: No configuration found. Jan 17 00:39:09.445039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
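The loop0-loop5 capacity changes are the squashfs images of the three sysext payloads ('containerd-flatcar', 'docker-flatcar', 'kubernetes'), which systemd-sysext merges into /usr, hence the reload requested by systemd-sysext right after the merge. The merge state can be inspected or redone with the standard sysext commands:

    # show which extension images are merged into which hierarchy
    systemd-sysext status
    # re-scan /etc/extensions and /var/lib/extensions and re-apply the overlay
    systemd-sysext refresh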
Jan 17 00:39:09.599173 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:39:09.626954 systemd[1]: Reloading finished in 828 ms. Jan 17 00:39:09.655762 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:39:09.671756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:39:09.699535 systemd[1]: Starting ensure-sysext.service... Jan 17 00:39:09.710219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:39:09.722583 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:39:09.722732 systemd[1]: Reloading... Jan 17 00:39:09.899001 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:39:09.902343 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:39:09.905156 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:39:09.905998 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Jan 17 00:39:09.906306 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Jan 17 00:39:09.951130 zram_generator::config[1415]: No configuration found. Jan 17 00:39:09.953034 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:39:09.953315 systemd-tmpfiles[1388]: Skipping /boot Jan 17 00:39:10.016350 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:39:10.016413 systemd-tmpfiles[1388]: Skipping /boot Jan 17 00:39:10.304154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:39:10.456600 systemd[1]: Reloading finished in 733 ms. Jan 17 00:39:10.505213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:39:10.560079 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:10.637064 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:39:10.652154 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:39:10.695555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:39:10.718742 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:39:10.732272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:10.733204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:39:10.739121 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:39:10.751355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:39:10.768250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:39:10.777104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
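The "Duplicate line for path" messages mean more than one tmpfiles.d fragment declares the same path; systemd-tmpfiles applies the first definition it reads and ignores later ones with exactly this warning. A hypothetical pair of fragments that would reproduce it:

    # /usr/lib/tmpfiles.d/a-base.conf   (hypothetical; this definition is applied)
    d /root 0700 root root -
    # /usr/lib/tmpfiles.d/b-extra.conf  (hypothetical; logged as a duplicate and ignored)
    d /root 0750 root root -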
Jan 17 00:39:10.777567 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:10.780583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:39:10.781234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:39:10.793374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:39:10.794073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:39:10.824309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:10.825085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:39:10.836355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:39:10.851791 augenrules[1489]: No rules Jan 17 00:39:10.858542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:39:10.871310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:39:10.873057 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:10.891997 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:10.902495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:39:10.913624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:39:10.930451 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:39:10.932247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:39:10.941545 systemd-resolved[1470]: Positive Trust Anchors: Jan 17 00:39:10.941618 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:39:10.941756 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:39:10.943804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:39:10.944230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:39:10.952138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:39:10.952540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:39:10.955440 systemd-resolved[1470]: Defaulting to hostname 'linux'. Jan 17 00:39:10.967045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:39:10.978986 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:39:11.008572 systemd[1]: Reached target network.target - Network. 
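The "Positive Trust Anchors" record is systemd-resolved's built-in DNSSEC trust anchor for the root zone (the ". IN DS 20326 8 2 ..." entry), and the negative anchors are private-use and reverse-lookup zones that are exempt from validation. Resolver state, including the active DNSSEC mode, can be checked with:

    # per-link DNS servers, search domains, and DNSSEC setting
    resolvectl status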
Jan 17 00:39:11.014517 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:39:11.020958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:39:11.030340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:11.030784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:39:11.044155 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:39:11.054182 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:39:11.065954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:39:11.091370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:39:11.102195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:39:11.110429 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:39:11.118930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:39:11.119207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:11.122554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:39:11.124464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:39:11.139403 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:39:11.150225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:39:11.158350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:39:11.158809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:39:11.173734 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:39:11.174186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:39:11.193404 systemd[1]: Finished ensure-sysext.service. Jan 17 00:39:11.201770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:39:11.202008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:39:11.216531 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:39:11.224515 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:39:11.360787 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:39:12.358335 systemd-resolved[1470]: Clock change detected. Flushing caches. Jan 17 00:39:12.358572 systemd-timesyncd[1526]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:39:12.358774 systemd-timesyncd[1526]: Initial clock synchronization to Sat 2026-01-17 00:39:12.358161 UTC. Jan 17 00:39:12.375507 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:39:12.389682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
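systemd-timesyncd reached its NTP server at 10.0.0.1:123 (the QEMU user-mode gateway), and resolved flushed its caches when the resulting clock step was detected. The server list comes from timesyncd.conf; a minimal sketch, with this VM's gateway standing in for a real NTP pool:

    # /etc/systemd/timesyncd.conf (sketch; 10.0.0.1 is taken from the log above)
    [Time]
    NTP=10.0.0.1

On a running system, timedatectl timesync-status reports the same server, poll interval, and offset.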
Jan 17 00:39:12.395577 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:39:12.402693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:39:12.410615 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:39:12.410722 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:39:12.417721 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:39:12.427925 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:39:12.433642 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:39:12.440543 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:39:12.455028 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:39:12.470429 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:39:12.480727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:39:12.490834 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:39:12.496668 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:39:12.501748 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:39:12.511106 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:39:12.511168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:39:12.511434 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:39:12.516337 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:39:12.529991 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:39:12.543641 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:39:12.563907 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:39:12.583756 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:39:12.594751 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:39:12.597060 jq[1535]: false Jan 17 00:39:12.600465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:12.622729 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:39:12.634095 dbus-daemon[1534]: [system] SELinux support is enabled Jan 17 00:39:12.636015 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:39:12.646841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:39:12.656112 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:39:12.673516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
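"System is tainted: cgroupsv1" is the direct consequence of the /etc/flatcar-cgroupv1 flag file written during the Ignition files stage, and the 10-use-cgroupfs.conf drop-in from op(d) points containerd at the same legacy hierarchy. Which hierarchy a booted system actually uses is easy to verify:

    # "tmpfs" indicates the legacy cgroup v1 layout, "cgroup2fs" the unified v2 hierarchy
    stat -fc %T /sys/fs/cgroup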
Jan 17 00:39:12.678914 extend-filesystems[1538]: Found loop3 Jan 17 00:39:12.678914 extend-filesystems[1538]: Found loop4 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found loop5 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found sr0 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda1 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda2 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda3 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found usr Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda4 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda6 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda7 Jan 17 00:39:12.698830 extend-filesystems[1538]: Found vda9 Jan 17 00:39:12.698830 extend-filesystems[1538]: Checking size of /dev/vda9 Jan 17 00:39:12.830024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:39:12.830107 extend-filesystems[1538]: Resized partition /dev/vda9 Jan 17 00:39:12.720918 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:39:12.838959 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:39:12.735087 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:39:12.742750 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:39:12.792793 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:39:12.811171 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:39:12.860520 jq[1569]: true Jan 17 00:39:12.840977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:39:12.841636 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:39:12.854871 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:39:12.855736 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:39:12.864703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:39:12.881666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:39:12.882121 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:39:12.893039 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:39:12.893124 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:39:12.903583 systemd-logind[1555]: New seat seat0. Jan 17 00:39:12.906953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1573) Jan 17 00:39:12.910174 update_engine[1567]: I20260117 00:39:12.910068 1567 main.cc:92] Flatcar Update Engine starting Jan 17 00:39:12.918890 update_engine[1567]: I20260117 00:39:12.918777 1567 update_check_scheduler.cc:74] Next update check in 6m0s Jan 17 00:39:12.926954 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:39:12.979064 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:39:12.983458 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:39:12.983458 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:39:12.983458 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
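extend-filesystems grew the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, to fill the enlarged partition. The manual equivalent is a single call, shown for illustration; point it only at the intended device:

    # grow a mounted ext4 filesystem to the current size of its partition
    resize2fs /dev/vda9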
Jan 17 00:39:12.987171 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:39:12.998556 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jan 17 00:39:12.987786 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:39:13.018104 jq[1587]: true Jan 17 00:39:13.009998 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:39:13.013525 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:39:13.013843 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:39:13.043352 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:39:13.051342 tar[1583]: linux-amd64/LICENSE Jan 17 00:39:13.052333 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:39:13.052710 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:39:13.052919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:39:13.054592 tar[1583]: linux-amd64/helm Jan 17 00:39:13.062476 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:39:13.062681 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:39:13.073035 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:39:13.087660 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:39:13.134898 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:39:13.146694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:39:13.164739 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:39:13.245752 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:39:13.274132 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:39:13.370147 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:39:13.881956 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:39:13.981546 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:39:13.982571 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:39:14.041287 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:39:14.282072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:39:14.335957 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:39:14.365265 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:39:14.385325 systemd[1]: Reached target getty.target - Login Prompts. 
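update-engine schedules its first update check (6m0s out) and locksmithd starts with the "reboot" strategy; both consult /etc/flatcar/update.conf, the file Ignition wrote in op(9). A typical shape for that file, with illustrative values not read from this host:

    # /etc/flatcar/update.conf (sketch)
    GROUP=stable
    REBOOT_STRATEGY=reboot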
Jan 17 00:39:14.647580 containerd[1588]: time="2026-01-17T00:39:14.646821081Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:39:14.856591 containerd[1588]: time="2026-01-17T00:39:14.849480847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.868838 containerd[1588]: time="2026-01-17T00:39:14.868773508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.868985744Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.869083868Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.869935508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.869963240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870163574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870284359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870867729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870891413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870957857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:14.871251 containerd[1588]: time="2026-01-17T00:39:14.870975200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.875984984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.876773927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.877080079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.877110606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.877554846Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:39:14.877836 containerd[1588]: time="2026-01-17T00:39:14.877710877Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.908086329Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.908740772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.908766820Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.908794421Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.908821993Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:39:14.909842 containerd[1588]: time="2026-01-17T00:39:14.909061831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911018785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911560807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911602635Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911631809Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911663889Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911746013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911817155Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911852932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911883359Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911913385Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911941227Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.911969480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.912094052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.913819 containerd[1588]: time="2026-01-17T00:39:14.912129850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912155387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912288626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912637998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912681891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912701918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912726043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912748365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912774744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912804119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912825308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912845005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.912869902Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.913041231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.913065728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 00:39:14.914560 containerd[1588]: time="2026-01-17T00:39:14.913081857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913299124Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913330993Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913350740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913424507Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913443463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913464862Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913486433Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:39:14.916168 containerd[1588]: time="2026-01-17T00:39:14.913501401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:39:14.916632 containerd[1588]: time="2026-01-17T00:39:14.914297417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:39:14.916632 containerd[1588]: time="2026-01-17T00:39:14.916512774Z" level=info msg="Connect containerd service" Jan 17 00:39:14.916632 containerd[1588]: time="2026-01-17T00:39:14.916575932Z" level=info msg="using legacy CRI server" Jan 17 00:39:14.916632 containerd[1588]: time="2026-01-17T00:39:14.916593625Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:39:14.917982 containerd[1588]: time="2026-01-17T00:39:14.916717837Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:39:14.943366 containerd[1588]: time="2026-01-17T00:39:14.943279812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:39:14.943932 containerd[1588]: time="2026-01-17T00:39:14.943685047Z" level=info msg="Start subscribing containerd event" Jan 17 00:39:14.943932 containerd[1588]: time="2026-01-17T00:39:14.943741091Z" level=info msg="Start recovering state" Jan 17 00:39:14.943932 containerd[1588]: time="2026-01-17T00:39:14.943862891Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:39:14.944049 containerd[1588]: time="2026-01-17T00:39:14.943961404Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:39:14.946271 containerd[1588]: time="2026-01-17T00:39:14.946034676Z" level=info msg="Start event monitor" Jan 17 00:39:14.946360 containerd[1588]: time="2026-01-17T00:39:14.946346328Z" level=info msg="Start snapshots syncer" Jan 17 00:39:14.946457 containerd[1588]: time="2026-01-17T00:39:14.946368139Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:39:14.946457 containerd[1588]: time="2026-01-17T00:39:14.946445704Z" level=info msg="Start streaming server" Jan 17 00:39:14.955105 containerd[1588]: time="2026-01-17T00:39:14.948140949Z" level=info msg="containerd successfully booted in 0.309027s" Jan 17 00:39:14.947812 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:39:16.119746 tar[1583]: linux-amd64/README.md Jan 17 00:39:16.166581 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:39:16.313582 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:39:16.357807 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:45496.service - OpenSSH per-connection server daemon (10.0.0.1:45496). 
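At this point containerd is up ("successfully booted in 0.309027s") and serving on /run/containerd/containerd.sock, the endpoint echoed in its CRI config above. A minimal sketch with the containerd Go client, not part of the log, can confirm the daemon version (v1.7.21 here); the "k8s.io" namespace is an assumption, chosen because it is the one the CRI plugin uses, though Version itself is namespace-agnostic.

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same socket the daemon reports serving above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ver, err := client.Version(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd", ver.Version, ver.Revision) // expect v1.7.21
}
```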
Jan 17 00:39:16.746146 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 45496 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:16.760867 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:16.790807 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:39:16.823771 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:39:16.838467 systemd-logind[1555]: New session 1 of user core. Jan 17 00:39:17.049318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:39:17.097000 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:39:17.144542 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:39:17.667775 systemd[1678]: Queued start job for default target default.target. Jan 17 00:39:17.669057 systemd[1678]: Created slice app.slice - User Application Slice. Jan 17 00:39:17.670242 systemd[1678]: Reached target paths.target - Paths. Jan 17 00:39:17.671130 systemd[1678]: Reached target timers.target - Timers. Jan 17 00:39:17.683625 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:39:17.752650 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:39:17.752761 systemd[1678]: Reached target sockets.target - Sockets. Jan 17 00:39:17.752782 systemd[1678]: Reached target basic.target - Basic System. Jan 17 00:39:17.752863 systemd[1678]: Reached target default.target - Main User Target. Jan 17 00:39:17.752921 systemd[1678]: Startup finished in 553ms. Jan 17 00:39:17.756747 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:39:17.787657 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:39:17.896715 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:45512.service - OpenSSH per-connection server daemon (10.0.0.1:45512). Jan 17 00:39:18.174672 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 45512 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:18.190956 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:18.221624 systemd-logind[1555]: New session 2 of user core. Jan 17 00:39:18.237946 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:39:18.405884 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:18.427062 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:45514.service - OpenSSH per-connection server daemon (10.0.0.1:45514). Jan 17 00:39:18.448659 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:45512.service: Deactivated successfully. Jan 17 00:39:18.459343 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:39:18.470540 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:39:18.491159 systemd-logind[1555]: Removed session 2. Jan 17 00:39:18.496556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:18.513798 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:39:18.532779 systemd[1]: Startup finished in 29.819s (kernel) + 21.924s (userspace) = 51.744s. 
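sshd logs accepted keys by their SHA256 fingerprint (the SHA256:UBEhqR/... string above) rather than the key itself, and the original key cannot be recovered from that hash. The hedged sketch below merely generates a throwaway ed25519 key to show how the same fingerprint format is computed with golang.org/x/crypto/ssh.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key purely for demonstration; the key behind the
	// SHA256:UBEhqR/... fingerprint in the log is not recoverable.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// Prints the same "SHA256:..." form sshd uses in its Accepted lines.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```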
Jan 17 00:39:18.540109 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 45514 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:18.538946 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:18.580348 systemd-logind[1555]: New session 3 of user core. Jan 17 00:39:18.590644 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:39:18.687145 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:18.721341 sshd[1699]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:18.881706 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:45514.service: Deactivated successfully. Jan 17 00:39:18.904139 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:39:18.905019 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:39:18.916797 systemd-logind[1555]: Removed session 3. Jan 17 00:39:22.414639 kubelet[1707]: E0117 00:39:22.411647 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:22.426805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:22.435532 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:39:28.734756 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:33126.service - OpenSSH per-connection server daemon (10.0.0.1:33126). Jan 17 00:39:28.931167 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 33126 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:28.937486 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:28.981981 systemd-logind[1555]: New session 4 of user core. Jan 17 00:39:28.997570 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:39:29.091031 sshd[1727]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:29.110749 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142). Jan 17 00:39:29.111959 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:33126.service: Deactivated successfully. Jan 17 00:39:29.117626 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:39:29.119543 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:39:29.121623 systemd-logind[1555]: Removed session 4. Jan 17 00:39:29.212274 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:29.216306 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:29.243671 systemd-logind[1555]: New session 5 of user core. Jan 17 00:39:29.254766 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:39:29.378730 sshd[1732]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:29.397678 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:33150.service - OpenSSH per-connection server daemon (10.0.0.1:33150). Jan 17 00:39:29.403158 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:33142.service: Deactivated successfully. 
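The kubelet exit above (status=1/FAILURE) is the stock failure mode on a node that has not yet been joined: /var/lib/kubelet/config.yaml only appears once something like kubeadm provisions it, so each restart attempt dies at the same file open. A trivial sketch, not from the log, of the check it keeps tripping over:

```go
package main

import (
	"fmt"
	"os"
)

// kubeletConfigPath is the path the failing unit logs above; the stat below
// mirrors the "open /var/lib/kubelet/config.yaml: no such file or directory"
// error that makes every kubelet start attempt exit until the file exists.
const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

func main() {
	if _, err := os.Stat(kubeletConfigPath); err != nil {
		fmt.Println("kubelet would fail to start:", err)
		return
	}
	fmt.Println("kubelet config present")
}
```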
Jan 17 00:39:29.422714 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:39:29.431094 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:39:29.439151 systemd-logind[1555]: Removed session 5. Jan 17 00:39:29.467100 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 33150 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:29.470550 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:29.483946 systemd-logind[1555]: New session 6 of user core. Jan 17 00:39:29.494683 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:39:29.573600 sshd[1740]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:29.593138 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:33160.service - OpenSSH per-connection server daemon (10.0.0.1:33160). Jan 17 00:39:29.596394 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:33150.service: Deactivated successfully. Jan 17 00:39:29.608841 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:39:29.616371 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:39:29.626854 systemd-logind[1555]: Removed session 6. Jan 17 00:39:29.689839 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 33160 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:29.691943 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:29.768909 systemd-logind[1555]: New session 7 of user core. Jan 17 00:39:29.784900 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:39:29.915887 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:39:29.916723 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:29.961877 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:29.968854 sshd[1748]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:29.998793 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:33166.service - OpenSSH per-connection server daemon (10.0.0.1:33166). Jan 17 00:39:29.999725 systemd[1]: sshd@6-10.0.0.123:22-10.0.0.1:33160.service: Deactivated successfully. Jan 17 00:39:30.022750 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:39:30.022992 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:39:30.030320 systemd-logind[1555]: Removed session 7. Jan 17 00:39:30.121069 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 33166 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:30.126497 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:30.148057 systemd-logind[1555]: New session 8 of user core. Jan 17 00:39:30.156702 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:39:30.245062 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:39:30.247054 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:30.266815 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:30.290003 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:39:30.291091 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:30.341690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:30.372442 auditctl[1768]: No rules Jan 17 00:39:30.378341 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:39:30.378867 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:30.396070 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:30.504988 augenrules[1787]: No rules Jan 17 00:39:30.507978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:30.511465 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:30.515860 sshd[1757]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:30.525499 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:33180.service - OpenSSH per-connection server daemon (10.0.0.1:33180). Jan 17 00:39:30.526141 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:33166.service: Deactivated successfully. Jan 17 00:39:30.532101 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:39:30.542551 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:39:30.547517 systemd-logind[1555]: Removed session 8. Jan 17 00:39:30.599385 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 33180 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:39:30.602005 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:30.820038 systemd-logind[1555]: New session 9 of user core. Jan 17 00:39:30.894954 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:39:31.033022 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:39:31.034175 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:32.607377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:39:32.627734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:34.532307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:34.563618 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:34.745720 systemd[1]: Starting docker.service - Docker Application Container Engine... 
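The audit-rules sequence earlier in this stretch (auditctl and augenrules both reporting "No rules", the unit stopped and finished again) was driven by the sudo'd `systemctl restart audit-rules`. For illustration only, the same restart can be requested programmatically over D-Bus with the go-systemd library; this assumes the caller has the privileges the sudo session had.

```go
package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx) // talks to systemd's D-Bus API
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Equivalent in effect to `systemctl restart audit-rules`;
	// "replace" is the normal job mode.
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "audit-rules.service", "replace", done); err != nil {
		panic(err)
	}
	fmt.Println("job result:", <-done) // "done" on success
}
```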
Jan 17 00:39:34.758281 (dockerd)[1838]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:39:34.826910 kubelet[1829]: E0117 00:39:34.826677 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:34.857813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:34.864819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:39:37.572501 dockerd[1838]: time="2026-01-17T00:39:37.571937681Z" level=info msg="Starting up" Jan 17 00:39:38.464951 dockerd[1838]: time="2026-01-17T00:39:38.464538564Z" level=info msg="Loading containers: start." Jan 17 00:39:39.001589 kernel: Initializing XFRM netlink socket Jan 17 00:39:39.468740 systemd-networkd[1250]: docker0: Link UP Jan 17 00:39:39.518403 dockerd[1838]: time="2026-01-17T00:39:39.518119520Z" level=info msg="Loading containers: done." Jan 17 00:39:39.645123 dockerd[1838]: time="2026-01-17T00:39:39.644917145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:39:39.645519 dockerd[1838]: time="2026-01-17T00:39:39.645134892Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:39:39.645519 dockerd[1838]: time="2026-01-17T00:39:39.645491017Z" level=info msg="Daemon has completed initialization" Jan 17 00:39:39.811155 dockerd[1838]: time="2026-01-17T00:39:39.806346201Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:39:39.813270 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:39:42.915481 containerd[1588]: time="2026-01-17T00:39:42.913534645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:39:44.471715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287149642.mount: Deactivated successfully. Jan 17 00:39:45.096046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:39:45.111917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:45.836008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:45.889440 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:46.329847 kubelet[2014]: E0117 00:39:46.325162 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:46.337494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:46.338044 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
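dockerd finishes initialization above and exposes its API on /run/docker.sock. As a sketch, not part of the log, the Docker Go SDK can confirm the daemon version the log reports (26.1.0):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Dial the socket from the "API listen on /run/docker.sock" line above.
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Version) // 26.1.0 per the dockerd log line
}
```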
Jan 17 00:39:50.925911 containerd[1588]: time="2026-01-17T00:39:50.925163856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:50.936035 containerd[1588]: time="2026-01-17T00:39:50.932747704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:39:50.936035 containerd[1588]: time="2026-01-17T00:39:50.933666949Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:50.946272 containerd[1588]: time="2026-01-17T00:39:50.945957860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:50.950581 containerd[1588]: time="2026-01-17T00:39:50.950480123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 8.036892059s" Jan 17 00:39:50.950581 containerd[1588]: time="2026-01-17T00:39:50.950541612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:39:50.955145 containerd[1588]: time="2026-01-17T00:39:50.954998143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:39:56.651771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:39:56.681639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
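The kube-apiserver pull above moved 29,067,246 bytes in about 8.04 s, roughly 3.4 MiB/s. Pulls like this go through containerd; a hedged equivalent with the containerd Go client follows, where "k8s.io" is the namespace the CRI plugin stores Kubernetes images in.

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Pull and unpack the same image the log fetched above.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.11",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// The digest should match the repo digest logged above (sha256:41eaec...).
	fmt.Println(img.Name(), img.Target().Digest)
}
```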
Jan 17 00:39:57.129523 containerd[1588]: time="2026-01-17T00:39:57.127374935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:57.131831 containerd[1588]: time="2026-01-17T00:39:57.130161204Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:39:57.133662 containerd[1588]: time="2026-01-17T00:39:57.133587241Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:57.187573 containerd[1588]: time="2026-01-17T00:39:57.176630925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:57.187573 containerd[1588]: time="2026-01-17T00:39:57.179570867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 6.224536623s" Jan 17 00:39:57.187573 containerd[1588]: time="2026-01-17T00:39:57.179618568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 00:39:57.192465 containerd[1588]: time="2026-01-17T00:39:57.192042645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:39:57.819681 update_engine[1567]: I20260117 00:39:57.812278 1567 update_attempter.cc:509] Updating boot flags... Jan 17 00:39:57.872127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:57.898480 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:57.998314 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2096) Jan 17 00:39:58.315467 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2094) Jan 17 00:39:58.401784 kubelet[2083]: E0117 00:39:58.401710 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:58.412490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:58.413537 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:40:02.617980 containerd[1588]: time="2026-01-17T00:40:02.617352891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:02.628027 containerd[1588]: time="2026-01-17T00:40:02.627581736Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:40:02.678326 containerd[1588]: time="2026-01-17T00:40:02.676930300Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:02.698717 containerd[1588]: time="2026-01-17T00:40:02.697089927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:02.701463 containerd[1588]: time="2026-01-17T00:40:02.699497894Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 5.507392389s" Jan 17 00:40:02.701463 containerd[1588]: time="2026-01-17T00:40:02.699847646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:40:02.708790 containerd[1588]: time="2026-01-17T00:40:02.708718043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:40:05.971969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853155538.mount: Deactivated successfully. 
Jan 17 00:40:08.003530 containerd[1588]: time="2026-01-17T00:40:08.002408817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.006304 containerd[1588]: time="2026-01-17T00:40:08.006084802Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:40:08.008092 containerd[1588]: time="2026-01-17T00:40:08.008046640Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.016241 containerd[1588]: time="2026-01-17T00:40:08.015877912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.017102 containerd[1588]: time="2026-01-17T00:40:08.017020988Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 5.308222662s" Jan 17 00:40:08.017102 containerd[1588]: time="2026-01-17T00:40:08.017093014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:40:08.020985 containerd[1588]: time="2026-01-17T00:40:08.020905642Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:40:08.588393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:40:08.606087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:40:08.700137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045459698.mount: Deactivated successfully. Jan 17 00:40:09.001921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:09.018034 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:40:09.099676 kubelet[2134]: E0117 00:40:09.099394 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:40:09.105253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:40:09.105681 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:40:11.370704 containerd[1588]: time="2026-01-17T00:40:11.370426568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:11.375749 containerd[1588]: time="2026-01-17T00:40:11.375035475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:40:11.377611 containerd[1588]: time="2026-01-17T00:40:11.377503221Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:11.386253 containerd[1588]: time="2026-01-17T00:40:11.386051642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:11.389473 containerd[1588]: time="2026-01-17T00:40:11.389145148Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.368063826s" Jan 17 00:40:11.389473 containerd[1588]: time="2026-01-17T00:40:11.389318626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:40:11.394510 containerd[1588]: time="2026-01-17T00:40:11.392643054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:40:11.976622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565544840.mount: Deactivated successfully. 
Jan 17 00:40:11.997443 containerd[1588]: time="2026-01-17T00:40:11.997278940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.004116 containerd[1588]: time="2026-01-17T00:40:12.003260316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:40:12.007614 containerd[1588]: time="2026-01-17T00:40:12.007499482Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.017353 containerd[1588]: time="2026-01-17T00:40:12.017251589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.026235 containerd[1588]: time="2026-01-17T00:40:12.025533808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 631.423832ms" Jan 17 00:40:12.026235 containerd[1588]: time="2026-01-17T00:40:12.025654545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:40:12.033019 containerd[1588]: time="2026-01-17T00:40:12.031880550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:40:12.748777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733630649.mount: Deactivated successfully. Jan 17 00:40:22.248384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:40:22.612005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:40:23.228692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:23.246286 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:40:23.404921 kubelet[2260]: E0117 00:40:23.404034 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:40:23.411080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:40:23.411483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:40:33.596058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:40:33.624611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:40:34.574264 containerd[1588]: time="2026-01-17T00:40:34.570475492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:34.585648 containerd[1588]: time="2026-01-17T00:40:34.585437364Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:40:34.594949 containerd[1588]: time="2026-01-17T00:40:34.590631551Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:34.823414 containerd[1588]: time="2026-01-17T00:40:34.820285777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:34.872842 containerd[1588]: time="2026-01-17T00:40:34.872276111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 22.840296694s" Jan 17 00:40:34.884540 containerd[1588]: time="2026-01-17T00:40:34.876941788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:40:35.059411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:35.113425 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:40:35.797557 kubelet[2294]: E0117 00:40:35.794144 2294 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:40:35.809651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:40:35.810611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:40:41.496484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:41.509418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:40:41.608425 systemd[1]: Reloading requested from client PID 2327 ('systemctl') (unit session-9.scope)... Jan 17 00:40:41.608455 systemd[1]: Reloading... Jan 17 00:40:41.835148 zram_generator::config[2366]: No configuration found. Jan 17 00:40:42.384016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:40:42.550739 systemd[1]: Reloading finished in 941 ms. Jan 17 00:40:42.720640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:40:42.720925 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:40:42.721587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:42.730418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
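The etcd image (57,680,541 bytes) needed 22.84 s, about 2.4 MiB/s, after which systemd reloads and deliberately stops and restarts the kubelet, this time with a real config. Its first complaints below are plain TCP "connection refused" against the not-yet-running API server at 10.0.0.123:6443; the same condition is visible, as a sketch, with a raw dial:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the API server endpoint the kubelet's bootstrap errors name below.
	conn, err := net.DialTimeout("tcp", "10.0.0.123:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```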
Jan 17 00:40:43.383350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:43.405248 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:40:43.729145 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:40:43.729145 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:40:43.729145 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:40:43.729145 kubelet[2426]: I0117 00:40:43.725682 2426 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:40:44.410010 kubelet[2426]: I0117 00:40:44.409387 2426 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:40:44.410010 kubelet[2426]: I0117 00:40:44.409455 2426 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:40:44.413470 kubelet[2426]: I0117 00:40:44.412373 2426 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:40:44.510748 kubelet[2426]: I0117 00:40:44.510650 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:40:44.512611 kubelet[2426]: E0117 00:40:44.512567 2426 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:44.532708 kubelet[2426]: E0117 00:40:44.532581 2426 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:40:44.532708 kubelet[2426]: I0117 00:40:44.532651 2426 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:40:44.542974 kubelet[2426]: I0117 00:40:44.542787 2426 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:40:44.543995 kubelet[2426]: I0117 00:40:44.543776 2426 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:40:44.544530 kubelet[2426]: I0117 00:40:44.543940 2426 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:40:44.544530 kubelet[2426]: I0117 00:40:44.544494 2426 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:40:44.544530 kubelet[2426]: I0117 00:40:44.544506 2426 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:40:44.546030 kubelet[2426]: I0117 00:40:44.544803 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:40:44.564521 kubelet[2426]: I0117 00:40:44.564309 2426 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:40:44.564521 kubelet[2426]: I0117 00:40:44.564478 2426 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:40:44.564781 kubelet[2426]: I0117 00:40:44.564550 2426 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:40:44.564781 kubelet[2426]: I0117 00:40:44.564570 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:40:44.583505 kubelet[2426]: W0117 00:40:44.583151 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:44.583505 kubelet[2426]: E0117 00:40:44.583333 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:44.583505 kubelet[2426]: W0117 00:40:44.583476 2426 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:44.583756 kubelet[2426]: E0117 00:40:44.583533 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:44.599808 kubelet[2426]: I0117 00:40:44.594791 2426 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:40:44.599808 kubelet[2426]: I0117 00:40:44.599120 2426 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:40:44.612782 kubelet[2426]: W0117 00:40:44.607263 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:40:44.615311 kubelet[2426]: I0117 00:40:44.615280 2426 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:40:44.619987 kubelet[2426]: I0117 00:40:44.615480 2426 server.go:1287] "Started kubelet" Jan 17 00:40:44.622256 kubelet[2426]: I0117 00:40:44.619160 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:40:44.622256 kubelet[2426]: I0117 00:40:44.621131 2426 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:40:44.622256 kubelet[2426]: I0117 00:40:44.621486 2426 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:40:44.624130 kubelet[2426]: I0117 00:40:44.624108 2426 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:40:44.627562 kubelet[2426]: I0117 00:40:44.626407 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:40:44.629753 kubelet[2426]: I0117 00:40:44.629730 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:40:44.635289 kubelet[2426]: I0117 00:40:44.634985 2426 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:40:44.635490 kubelet[2426]: E0117 00:40:44.635396 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:40:44.641681 kubelet[2426]: I0117 00:40:44.640786 2426 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:40:44.641681 kubelet[2426]: I0117 00:40:44.640940 2426 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:40:44.642393 kubelet[2426]: E0117 00:40:44.636912 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms" Jan 17 00:40:44.643107 kubelet[2426]: W0117 00:40:44.642947 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:44.643107 
kubelet[2426]: E0117 00:40:44.643023 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:44.644966 kubelet[2426]: I0117 00:40:44.644355 2426 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:40:44.644966 kubelet[2426]: I0117 00:40:44.644438 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:40:44.662172 kubelet[2426]: I0117 00:40:44.662005 2426 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:40:44.664782 kubelet[2426]: E0117 00:40:44.664545 2426 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:40:44.740137 kubelet[2426]: E0117 00:40:44.709151 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5dd43158ddd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:40:44.615409112 +0000 UTC m=+1.168727076,LastTimestamp:2026-01-17 00:40:44.615409112 +0000 UTC m=+1.168727076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:40:44.790476 kubelet[2426]: E0117 00:40:44.790323 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:40:44.847547 kubelet[2426]: E0117 00:40:44.847465 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms" Jan 17 00:40:44.871983 kubelet[2426]: I0117 00:40:44.871812 2426 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:40:44.873158 kubelet[2426]: I0117 00:40:44.872432 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:40:44.873158 kubelet[2426]: I0117 00:40:44.872466 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:40:44.895310 kubelet[2426]: E0117 00:40:44.892969 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:40:44.897530 kubelet[2426]: I0117 00:40:44.896480 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:40:44.913441 kubelet[2426]: I0117 00:40:44.912518 2426 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:40:44.913441 kubelet[2426]: I0117 00:40:44.912601 2426 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:40:44.913441 kubelet[2426]: I0117 00:40:44.912639 2426 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:40:44.913441 kubelet[2426]: I0117 00:40:44.912654 2426 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:40:44.913441 kubelet[2426]: E0117 00:40:44.912742 2426 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:40:44.926446 kubelet[2426]: W0117 00:40:44.925974 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:44.926446 kubelet[2426]: E0117 00:40:44.926065 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:44.993464 kubelet[2426]: E0117 00:40:44.993324 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:40:45.009274 kubelet[2426]: I0117 00:40:45.007562 2426 policy_none.go:49] "None policy: Start" Jan 17 00:40:45.009274 kubelet[2426]: I0117 00:40:45.007606 2426 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:40:45.009274 kubelet[2426]: I0117 00:40:45.007633 2426 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:40:45.015759 kubelet[2426]: E0117 00:40:45.015703 2426 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:40:45.047130 kubelet[2426]: I0117 00:40:45.045981 2426 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:40:45.047130 kubelet[2426]: I0117 00:40:45.046420 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:40:45.047130 kubelet[2426]: I0117 00:40:45.046443 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:40:45.051977 kubelet[2426]: I0117 00:40:45.051951 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:40:45.062087 kubelet[2426]: E0117 00:40:45.062060 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:40:45.067921 kubelet[2426]: E0117 00:40:45.062437 2426 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:40:45.212408 kubelet[2426]: I0117 00:40:45.209448 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:45.212408 kubelet[2426]: E0117 00:40:45.211266 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 17 00:40:45.248460 kubelet[2426]: E0117 00:40:45.248401 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms" Jan 17 00:40:45.254055 kubelet[2426]: E0117 00:40:45.253954 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:45.263936 kubelet[2426]: E0117 00:40:45.262793 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:45.269957 kubelet[2426]: E0117 00:40:45.267317 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:45.296306 kubelet[2426]: I0117 00:40:45.295676 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:45.296306 kubelet[2426]: I0117 00:40:45.295796 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:45.296306 kubelet[2426]: I0117 00:40:45.295828 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:45.296306 kubelet[2426]: I0117 00:40:45.295917 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:45.296306 kubelet[2426]: I0117 00:40:45.295948 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:45.296662 kubelet[2426]: I0117 00:40:45.295974 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:45.296662 kubelet[2426]: I0117 00:40:45.296002 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:45.296662 kubelet[2426]: I0117 00:40:45.296027 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:45.296662 kubelet[2426]: I0117 00:40:45.296053 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:45.425282 kubelet[2426]: I0117 00:40:45.424348 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:45.425282 kubelet[2426]: E0117 00:40:45.424912 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 17 00:40:45.592791 kubelet[2426]: E0117 00:40:45.578107 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:45.592791 kubelet[2426]: E0117 00:40:45.578630 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:45.592791 kubelet[2426]: E0117 00:40:45.584546 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:45.648148 containerd[1588]: time="2026-01-17T00:40:45.636659011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:45.648148 containerd[1588]: time="2026-01-17T00:40:45.639100188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e60b0e51803435df017df23b03cfc45,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:45.648148 containerd[1588]: time="2026-01-17T00:40:45.639989951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:45.849057 kubelet[2426]: I0117 
00:40:45.848295 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:45.852489 kubelet[2426]: E0117 00:40:45.852439 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 17 00:40:45.888897 kubelet[2426]: W0117 00:40:45.885835 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:45.889676 kubelet[2426]: E0117 00:40:45.889316 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:45.895484 kubelet[2426]: W0117 00:40:45.895406 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:45.895680 kubelet[2426]: E0117 00:40:45.895648 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:46.075701 kubelet[2426]: E0117 00:40:46.071717 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s" Jan 17 00:40:46.098521 kubelet[2426]: W0117 00:40:46.097788 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:46.099343 kubelet[2426]: E0117 00:40:46.098697 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:46.470530 kubelet[2426]: W0117 00:40:46.469436 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:46.470530 kubelet[2426]: E0117 00:40:46.469509 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:46.526744 kubelet[2426]: E0117 00:40:46.523461 
2426 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:46.689658 kubelet[2426]: I0117 00:40:46.689165 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:46.694432 kubelet[2426]: E0117 00:40:46.694329 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 17 00:40:46.905305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431842394.mount: Deactivated successfully. Jan 17 00:40:46.944464 containerd[1588]: time="2026-01-17T00:40:46.944039069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:46.951929 containerd[1588]: time="2026-01-17T00:40:46.951600782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:40:46.954465 containerd[1588]: time="2026-01-17T00:40:46.954119319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:46.979422 containerd[1588]: time="2026-01-17T00:40:46.979092412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:40:46.986488 containerd[1588]: time="2026-01-17T00:40:46.984597110Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:46.994454 containerd[1588]: time="2026-01-17T00:40:46.994330625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:47.000374 containerd[1588]: time="2026-01-17T00:40:46.999982876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:40:47.015011 containerd[1588]: time="2026-01-17T00:40:47.014562173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:47.017729 containerd[1588]: time="2026-01-17T00:40:47.015779337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.376616934s" Jan 17 00:40:47.021438 containerd[1588]: time="2026-01-17T00:40:47.021359516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.384311948s" Jan 17 00:40:47.035665 containerd[1588]: time="2026-01-17T00:40:47.035429610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.395292674s" Jan 17 00:40:47.687603 kubelet[2426]: E0117 00:40:47.686922 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="3.2s" Jan 17 00:40:47.981753 containerd[1588]: time="2026-01-17T00:40:47.981024703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:47.984674 containerd[1588]: time="2026-01-17T00:40:47.982686560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:47.987001 containerd[1588]: time="2026-01-17T00:40:47.983767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:47.987001 containerd[1588]: time="2026-01-17T00:40:47.984419035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:48.017451 containerd[1588]: time="2026-01-17T00:40:48.016823226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:48.018752 containerd[1588]: time="2026-01-17T00:40:48.017543430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:48.018752 containerd[1588]: time="2026-01-17T00:40:48.017573587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:48.018752 containerd[1588]: time="2026-01-17T00:40:48.017722565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:48.050161 containerd[1588]: time="2026-01-17T00:40:48.049818993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:48.050161 containerd[1588]: time="2026-01-17T00:40:48.049955098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:48.050161 containerd[1588]: time="2026-01-17T00:40:48.049975396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:48.050161 containerd[1588]: time="2026-01-17T00:40:48.050107633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:48.068911 kubelet[2426]: W0117 00:40:48.068643 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:48.068911 kubelet[2426]: E0117 00:40:48.068703 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:48.327526 kubelet[2426]: I0117 00:40:48.325947 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:48.327526 kubelet[2426]: E0117 00:40:48.326713 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 17 00:40:48.575581 containerd[1588]: time="2026-01-17T00:40:48.574959131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dd5d5f6b1483f8973a58bf33d35c7020bfad8fe3201771c4c70bcef90a84a2b\"" Jan 17 00:40:48.593070 kubelet[2426]: E0117 00:40:48.592132 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:48.598146 containerd[1588]: time="2026-01-17T00:40:48.597302958Z" level=info msg="CreateContainer within sandbox \"9dd5d5f6b1483f8973a58bf33d35c7020bfad8fe3201771c4c70bcef90a84a2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:40:48.598146 containerd[1588]: time="2026-01-17T00:40:48.597490307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e60b0e51803435df017df23b03cfc45,Namespace:kube-system,Attempt:0,} returns sandbox id \"e14c8e67e353dc01c4c0921f74a5e5a27a6e8afa36763283e7d162fa691c0e27\"" Jan 17 00:40:48.599914 kubelet[2426]: E0117 00:40:48.599403 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:48.617797 containerd[1588]: time="2026-01-17T00:40:48.616149218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19\"" Jan 17 00:40:48.618114 containerd[1588]: time="2026-01-17T00:40:48.617996538Z" level=info msg="CreateContainer within sandbox \"e14c8e67e353dc01c4c0921f74a5e5a27a6e8afa36763283e7d162fa691c0e27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:40:48.624063 kubelet[2426]: E0117 00:40:48.623064 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:48.637444 containerd[1588]: time="2026-01-17T00:40:48.637393604Z" level=info msg="CreateContainer within sandbox \"612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:40:48.673140 containerd[1588]: time="2026-01-17T00:40:48.672851258Z" level=info msg="CreateContainer within sandbox \"9dd5d5f6b1483f8973a58bf33d35c7020bfad8fe3201771c4c70bcef90a84a2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618\"" Jan 17 00:40:48.674410 containerd[1588]: time="2026-01-17T00:40:48.674379387Z" level=info msg="StartContainer for \"1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618\"" Jan 17 00:40:48.704841 containerd[1588]: time="2026-01-17T00:40:48.704548803Z" level=info msg="CreateContainer within sandbox \"e14c8e67e353dc01c4c0921f74a5e5a27a6e8afa36763283e7d162fa691c0e27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0bc9a567ee86f68c368b72d4380144848bfc9557d26cf97dbd8aac6c712624a\"" Jan 17 00:40:48.704841 containerd[1588]: time="2026-01-17T00:40:48.705844725Z" level=info msg="StartContainer for \"f0bc9a567ee86f68c368b72d4380144848bfc9557d26cf97dbd8aac6c712624a\"" Jan 17 00:40:48.711115 containerd[1588]: time="2026-01-17T00:40:48.710988510Z" level=info msg="CreateContainer within sandbox \"612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909\"" Jan 17 00:40:48.714006 containerd[1588]: time="2026-01-17T00:40:48.713971732Z" level=info msg="StartContainer for \"8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909\"" Jan 17 00:40:48.731401 kubelet[2426]: W0117 00:40:48.730592 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:48.731401 kubelet[2426]: E0117 00:40:48.730993 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:48.999627 kubelet[2426]: W0117 00:40:48.992584 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:48.999627 kubelet[2426]: E0117 00:40:48.992678 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:49.006369 kubelet[2426]: W0117 00:40:49.005326 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 17 00:40:49.006369 kubelet[2426]: E0117 00:40:49.005494 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:40:49.017617 systemd[1]: run-containerd-runc-k8s.io-612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19-runc.BdZEb2.mount: Deactivated successfully. Jan 17 00:40:49.020427 containerd[1588]: time="2026-01-17T00:40:49.018627395Z" level=info msg="StartContainer for \"1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618\" returns successfully" Jan 17 00:40:49.323503 containerd[1588]: time="2026-01-17T00:40:49.280140842Z" level=info msg="StartContainer for \"f0bc9a567ee86f68c368b72d4380144848bfc9557d26cf97dbd8aac6c712624a\" returns successfully" Jan 17 00:40:49.334598 containerd[1588]: time="2026-01-17T00:40:49.324787518Z" level=info msg="StartContainer for \"8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909\" returns successfully" Jan 17 00:40:50.128473 kubelet[2426]: E0117 00:40:50.124257 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:50.136296 kubelet[2426]: E0117 00:40:50.134812 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:50.145967 kubelet[2426]: E0117 00:40:50.143394 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:50.145967 kubelet[2426]: E0117 00:40:50.144610 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:50.146297 kubelet[2426]: E0117 00:40:50.146172 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:50.146973 kubelet[2426]: E0117 00:40:50.146715 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:51.171478 kubelet[2426]: E0117 00:40:51.168539 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:51.171478 kubelet[2426]: E0117 00:40:51.168808 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:51.171478 kubelet[2426]: E0117 00:40:51.169097 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:51.171478 kubelet[2426]: E0117 00:40:51.169443 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:51.171478 kubelet[2426]: E0117 00:40:51.169559 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:51.175469 kubelet[2426]: E0117 00:40:51.174173 
2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:51.539505 kubelet[2426]: I0117 00:40:51.536050 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:52.177701 kubelet[2426]: E0117 00:40:52.176678 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:52.184779 kubelet[2426]: E0117 00:40:52.180126 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:52.184779 kubelet[2426]: E0117 00:40:52.181376 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:52.184779 kubelet[2426]: E0117 00:40:52.181537 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:52.986342 kubelet[2426]: E0117 00:40:52.985590 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:52.990767 kubelet[2426]: E0117 00:40:52.988409 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:53.179864 kubelet[2426]: E0117 00:40:53.179762 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:40:53.181098 kubelet[2426]: E0117 00:40:53.180031 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:54.824395 kubelet[2426]: E0117 00:40:54.823981 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:40:54.833508 kubelet[2426]: E0117 00:40:54.832971 2426 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5dd43158ddd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:40:44.615409112 +0000 UTC m=+1.168727076,LastTimestamp:2026-01-17 00:40:44.615409112 +0000 UTC m=+1.168727076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:40:55.040364 kubelet[2426]: I0117 00:40:55.038104 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:55.049157 kubelet[2426]: I0117 00:40:55.045934 2426 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:40:55.070970 kubelet[2426]: E0117 00:40:55.069975 2426 event.go:359] "Server rejected event (will not retry!)" err="namespaces 
\"default\" not found" event="&Event{ObjectMeta:{localhost.188b5dd432017f19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:40:44.626460441 +0000 UTC m=+1.179778406,LastTimestamp:2026-01-17 00:40:44.626460441 +0000 UTC m=+1.179778406,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:40:55.102308 kubelet[2426]: E0117 00:40:55.102267 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:55.102539 kubelet[2426]: I0117 00:40:55.102521 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:55.109770 kubelet[2426]: E0117 00:40:55.108491 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:55.109770 kubelet[2426]: I0117 00:40:55.108551 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:55.109770 kubelet[2426]: E0117 00:40:55.110452 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:55.779359 kubelet[2426]: I0117 00:40:55.777879 2426 apiserver.go:52] "Watching apiserver" Jan 17 00:40:55.842963 kubelet[2426]: I0117 00:40:55.841634 2426 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:40:59.194464 kubelet[2426]: I0117 00:40:59.194141 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:59.218956 kubelet[2426]: E0117 00:40:59.216526 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:59.294514 kubelet[2426]: I0117 00:40:59.294091 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.294041627 podStartE2EDuration="294.041627ms" podCreationTimestamp="2026-01-17 00:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:40:59.292450827 +0000 UTC m=+15.845768791" watchObservedRunningTime="2026-01-17 00:40:59.294041627 +0000 UTC m=+15.847359601" Jan 17 00:40:59.428412 kubelet[2426]: E0117 00:40:59.425388 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:59.587484 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-9.scope)... Jan 17 00:40:59.587589 systemd[1]: Reloading... 
Jan 17 00:40:59.979899 zram_generator::config[2739]: No configuration found. Jan 17 00:41:00.446791 kubelet[2426]: E0117 00:41:00.443703 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:00.718675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:41:00.991414 systemd[1]: Reloading finished in 1402 ms. Jan 17 00:41:01.209123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:01.240656 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:41:01.241462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:41:01.387362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:41:02.532631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:41:02.533709 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:41:03.074818 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:41:03.074818 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:41:03.074818 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:41:03.074818 kubelet[2796]: I0117 00:41:03.073676 2796 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:41:03.102709 kubelet[2796]: I0117 00:41:03.102558 2796 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:41:03.102709 kubelet[2796]: I0117 00:41:03.102641 2796 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:41:03.103109 kubelet[2796]: I0117 00:41:03.103085 2796 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:41:03.109711 kubelet[2796]: I0117 00:41:03.109629 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:41:03.129525 kubelet[2796]: I0117 00:41:03.125708 2796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:41:03.151248 kubelet[2796]: E0117 00:41:03.149981 2796 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:41:03.151248 kubelet[2796]: I0117 00:41:03.150035 2796 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
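Note: the "RuntimeConfig from runtime service failed ... unknown method" / "Falling back to using cgroupDriver from kubelet config" pair above means this containerd (v1.7.21) does not implement the CRI RuntimeConfig call that the KubeletCgroupDriverFromCRI feature gate queries, so the kubelet keeps the driver from its own config ("CgroupDriver":"cgroupfs"). The nodeConfig also reports "CgroupVersion":1, which is what triggered the earlier CgroupV1 maintenance-mode warning event. One common way to tell v1 from v2 at runtime, shown as a hedged sketch (the cgroup.controllers file exists only on the cgroup v2 unified hierarchy):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup.controllers is present only when /sys/fs/cgroup is the
        // cgroup v2 unified hierarchy; on legacy v1 mounts it is absent.
        _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
        switch {
        case err == nil:
            fmt.Println("cgroup v2 (unified hierarchy)")
        case os.IsNotExist(err):
            fmt.Println("cgroup v1 (legacy hierarchy)")
        default:
            fmt.Fprintln(os.Stderr, "stat failed:", err)
        }
    }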
Jan 17 00:41:03.215401 kubelet[2796]: I0117 00:41:03.215167 2796 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:41:03.218694 kubelet[2796]: I0117 00:41:03.218604 2796 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:41:03.221050 kubelet[2796]: I0117 00:41:03.218880 2796 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:41:03.221847 kubelet[2796]: I0117 00:41:03.221282 2796 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:41:03.221847 kubelet[2796]: I0117 00:41:03.221565 2796 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:41:03.222282 kubelet[2796]: I0117 00:41:03.221996 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:41:03.223377 kubelet[2796]: I0117 00:41:03.223292 2796 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:41:03.223635 kubelet[2796]: I0117 00:41:03.223526 2796 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:41:03.224022 kubelet[2796]: I0117 00:41:03.223918 2796 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:41:03.224546 kubelet[2796]: I0117 00:41:03.224147 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:41:03.227448 kubelet[2796]: I0117 00:41:03.227424 2796 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:41:03.238035 kubelet[2796]: I0117 00:41:03.229809 2796 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:41:03.238035 kubelet[2796]: I0117 00:41:03.230828 2796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:41:03.238035 kubelet[2796]: I0117 00:41:03.230869 2796 server.go:1287] "Started kubelet" Jan 17 00:41:03.238035 kubelet[2796]: I0117 00:41:03.234002 2796 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:41:03.239483 kubelet[2796]: I0117 00:41:03.239432 2796 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:41:03.240298 kubelet[2796]: I0117 00:41:03.240277 2796 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:41:03.240717 kubelet[2796]: I0117 00:41:03.240619 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:41:03.245314 kubelet[2796]: I0117 00:41:03.241113 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:41:03.266365 kubelet[2796]: I0117 00:41:03.252131 2796 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:41:03.266365 kubelet[2796]: I0117 00:41:03.262158 2796 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:41:03.266365 kubelet[2796]: I0117 00:41:03.263053 2796 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:41:03.266365 kubelet[2796]: I0117 00:41:03.263392 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:41:03.270451 kubelet[2796]: E0117 00:41:03.270289 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:41:03.271029 kubelet[2796]: I0117 00:41:03.270990 2796 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:41:03.285304 kubelet[2796]: I0117 00:41:03.285101 2796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:41:03.285689 kubelet[2796]: I0117 00:41:03.285380 2796 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:41:03.310081 kubelet[2796]: E0117 00:41:03.306544 2796 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:41:03.379018 kubelet[2796]: I0117 00:41:03.370447 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:41:03.382912 kubelet[2796]: I0117 00:41:03.380450 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:41:03.382912 kubelet[2796]: I0117 00:41:03.380604 2796 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:41:03.382912 kubelet[2796]: I0117 00:41:03.380635 2796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
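Note: the factory registration lines above come from cAdvisor inside the kubelet probing each container runtime it knows about; the crio factory fails only because /var/run/crio/crio.sock does not exist on this host, while the systemd and containerd factories register successfully. A rough sketch of that kind of availability probe (not cAdvisor's actual code, which issues an HTTP request over the socket), assuming containerd's default socket path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, sock := range []string{
            "/var/run/crio/crio.sock",         // absent here, so registration fails
            "/run/containerd/containerd.sock", // the active runtime on this node
        } {
            conn, err := net.DialTimeout("unix", sock, time.Second)
            if err != nil {
                fmt.Printf("%s: unavailable (%v)\n", sock, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: reachable\n", sock)
        }
    }

On a containerd node the failed crio registration is harmless noise; nothing needs fixing.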
Jan 17 00:41:03.382912 kubelet[2796]: I0117 00:41:03.380646 2796 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:41:03.382912 kubelet[2796]: E0117 00:41:03.380724 2796 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:41:03.481901 kubelet[2796]: E0117 00:41:03.481610 2796 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:41:03.563987 kubelet[2796]: I0117 00:41:03.563438 2796 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:41:03.563987 kubelet[2796]: I0117 00:41:03.563621 2796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:41:03.564969 kubelet[2796]: I0117 00:41:03.564491 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:41:03.567298 kubelet[2796]: I0117 00:41:03.566277 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:41:03.567298 kubelet[2796]: I0117 00:41:03.566614 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:41:03.568468 kubelet[2796]: I0117 00:41:03.568294 2796 policy_none.go:49] "None policy: Start" Jan 17 00:41:03.571341 kubelet[2796]: I0117 00:41:03.568620 2796 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:41:03.571341 kubelet[2796]: I0117 00:41:03.568640 2796 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:41:03.571341 kubelet[2796]: I0117 00:41:03.568837 2796 state_mem.go:75] "Updated machine memory state" Jan 17 00:41:03.577308 kubelet[2796]: I0117 00:41:03.577242 2796 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:41:03.577521 kubelet[2796]: I0117 00:41:03.577470 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:41:03.577589 kubelet[2796]: I0117 00:41:03.577518 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:41:03.580650 kubelet[2796]: I0117 00:41:03.578543 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:41:03.587552 kubelet[2796]: E0117 00:41:03.586859 2796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:41:03.685651 kubelet[2796]: I0117 00:41:03.683838 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:41:03.685651 kubelet[2796]: I0117 00:41:03.684086 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:03.689909 kubelet[2796]: I0117 00:41:03.683838 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:41:03.714906 kubelet[2796]: I0117 00:41:03.714164 2796 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:41:03.738323 kubelet[2796]: E0117 00:41:03.738256 2796 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:41:03.745964 kubelet[2796]: I0117 00:41:03.745938 2796 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:41:03.746481 kubelet[2796]: I0117 00:41:03.746139 2796 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:41:03.792116 kubelet[2796]: I0117 00:41:03.792009 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:41:03.792116 kubelet[2796]: I0117 00:41:03.792107 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:03.792398 kubelet[2796]: I0117 00:41:03.792147 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:03.792398 kubelet[2796]: I0117 00:41:03.792262 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e60b0e51803435df017df23b03cfc45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e60b0e51803435df017df23b03cfc45\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:03.792398 kubelet[2796]: I0117 00:41:03.792299 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:41:03.792398 kubelet[2796]: I0117 00:41:03.792327 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 
00:41:03.792398 kubelet[2796]: I0117 00:41:03.792354 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:41:03.792557 kubelet[2796]: I0117 00:41:03.792384 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:41:03.792557 kubelet[2796]: I0117 00:41:03.792415 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:41:04.047874 kubelet[2796]: E0117 00:41:04.045560 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:04.055527 kubelet[2796]: E0117 00:41:04.045137 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:04.063168 kubelet[2796]: E0117 00:41:04.062892 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:04.229908 kubelet[2796]: I0117 00:41:04.228855 2796 apiserver.go:52] "Watching apiserver" Jan 17 00:41:04.286479 kubelet[2796]: I0117 00:41:04.285968 2796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:41:04.405928 kubelet[2796]: I0117 00:41:04.402925 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.397657344 podStartE2EDuration="1.397657344s" podCreationTimestamp="2026-01-17 00:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:41:04.327875671 +0000 UTC m=+1.726720746" watchObservedRunningTime="2026-01-17 00:41:04.397657344 +0000 UTC m=+1.796502420" Jan 17 00:41:04.452723 kubelet[2796]: I0117 00:41:04.452562 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4525340820000001 podStartE2EDuration="1.452534082s" podCreationTimestamp="2026-01-17 00:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:41:04.403142867 +0000 UTC m=+1.801987942" watchObservedRunningTime="2026-01-17 00:41:04.452534082 +0000 UTC m=+1.851379167" Jan 17 00:41:04.489851 kubelet[2796]: E0117 00:41:04.489766 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 17 00:41:04.495269 kubelet[2796]: I0117 00:41:04.492536 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:04.495269 kubelet[2796]: E0117 00:41:04.493705 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:04.525139 kubelet[2796]: I0117 00:41:04.524922 2796 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:41:04.527013 containerd[1588]: time="2026-01-17T00:41:04.526881052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:41:04.527848 kubelet[2796]: I0117 00:41:04.527582 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:41:04.534127 kubelet[2796]: E0117 00:41:04.533700 2796 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:41:04.535281 kubelet[2796]: E0117 00:41:04.535067 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:05.416618 kubelet[2796]: W0117 00:41:05.416312 2796 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 17 00:41:05.416618 kubelet[2796]: E0117 00:41:05.416396 2796 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 17 00:41:05.416618 kubelet[2796]: I0117 00:41:05.416455 2796 status_manager.go:890] "Failed to get status for pod" podUID="7bd52fec-c7f3-4b9c-9398-f157b92aeecf" pod="kube-system/kube-proxy-xx8tt" err="pods \"kube-proxy-xx8tt\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 17 00:41:05.427630 kubelet[2796]: W0117 00:41:05.426789 2796 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 17 00:41:05.427630 kubelet[2796]: E0117 00:41:05.426836 2796 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 17 00:41:05.493482 kubelet[2796]: E0117 00:41:05.493382 2796 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:05.502093 kubelet[2796]: E0117 00:41:05.495867 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:05.570710 kubelet[2796]: I0117 00:41:05.569690 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bd52fec-c7f3-4b9c-9398-f157b92aeecf-kube-proxy\") pod \"kube-proxy-xx8tt\" (UID: \"7bd52fec-c7f3-4b9c-9398-f157b92aeecf\") " pod="kube-system/kube-proxy-xx8tt" Jan 17 00:41:05.570710 kubelet[2796]: I0117 00:41:05.570427 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd52fec-c7f3-4b9c-9398-f157b92aeecf-lib-modules\") pod \"kube-proxy-xx8tt\" (UID: \"7bd52fec-c7f3-4b9c-9398-f157b92aeecf\") " pod="kube-system/kube-proxy-xx8tt" Jan 17 00:41:05.570710 kubelet[2796]: I0117 00:41:05.570616 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd52fec-c7f3-4b9c-9398-f157b92aeecf-xtables-lock\") pod \"kube-proxy-xx8tt\" (UID: \"7bd52fec-c7f3-4b9c-9398-f157b92aeecf\") " pod="kube-system/kube-proxy-xx8tt" Jan 17 00:41:05.570710 kubelet[2796]: I0117 00:41:05.570702 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwgj\" (UniqueName: \"kubernetes.io/projected/7bd52fec-c7f3-4b9c-9398-f157b92aeecf-kube-api-access-ckwgj\") pod \"kube-proxy-xx8tt\" (UID: \"7bd52fec-c7f3-4b9c-9398-f157b92aeecf\") " pod="kube-system/kube-proxy-xx8tt" Jan 17 00:41:06.496278 kubelet[2796]: E0117 00:41:06.495955 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:06.653268 kubelet[2796]: E0117 00:41:06.653146 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:06.655404 containerd[1588]: time="2026-01-17T00:41:06.655040337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx8tt,Uid:7bd52fec-c7f3-4b9c-9398-f157b92aeecf,Namespace:kube-system,Attempt:0,}" Jan 17 00:41:06.722319 kubelet[2796]: I0117 00:41:06.721323 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7d4f04b-e456-42ef-9dad-130d712092ab-var-lib-calico\") pod \"tigera-operator-7dcd859c48-84kmr\" (UID: \"e7d4f04b-e456-42ef-9dad-130d712092ab\") " pod="tigera-operator/tigera-operator-7dcd859c48-84kmr" Jan 17 00:41:06.722319 kubelet[2796]: I0117 00:41:06.721383 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrhx\" (UniqueName: \"kubernetes.io/projected/e7d4f04b-e456-42ef-9dad-130d712092ab-kube-api-access-4rrhx\") pod \"tigera-operator-7dcd859c48-84kmr\" (UID: \"e7d4f04b-e456-42ef-9dad-130d712092ab\") " pod="tigera-operator/tigera-operator-7dcd859c48-84kmr" Jan 17 00:41:06.764101 containerd[1588]: time="2026-01-17T00:41:06.762574157Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:06.764101 containerd[1588]: time="2026-01-17T00:41:06.762652403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:06.764101 containerd[1588]: time="2026-01-17T00:41:06.762689882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:06.764101 containerd[1588]: time="2026-01-17T00:41:06.762924238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:06.870576 containerd[1588]: time="2026-01-17T00:41:06.870050580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx8tt,Uid:7bd52fec-c7f3-4b9c-9398-f157b92aeecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ccb0a427e2a6897051462151f0b89bc88de53bd74e32ef91f00d81b2293932\"" Jan 17 00:41:06.871605 kubelet[2796]: E0117 00:41:06.871572 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:06.878910 kubelet[2796]: E0117 00:41:06.878770 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:06.880952 containerd[1588]: time="2026-01-17T00:41:06.879289574Z" level=info msg="CreateContainer within sandbox \"16ccb0a427e2a6897051462151f0b89bc88de53bd74e32ef91f00d81b2293932\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:41:06.887871 containerd[1588]: time="2026-01-17T00:41:06.887377144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-84kmr,Uid:e7d4f04b-e456-42ef-9dad-130d712092ab,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:41:06.950148 containerd[1588]: time="2026-01-17T00:41:06.949997143Z" level=info msg="CreateContainer within sandbox \"16ccb0a427e2a6897051462151f0b89bc88de53bd74e32ef91f00d81b2293932\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c82f6a29e3e69308ccee61b818d9ae10e5b8274c9ffc8f24294d239a3e11daf\"" Jan 17 00:41:06.952853 containerd[1588]: time="2026-01-17T00:41:06.952407424Z" level=info msg="StartContainer for \"9c82f6a29e3e69308ccee61b818d9ae10e5b8274c9ffc8f24294d239a3e11daf\"" Jan 17 00:41:07.049964 containerd[1588]: time="2026-01-17T00:41:07.049475089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:07.049964 containerd[1588]: time="2026-01-17T00:41:07.049684599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:07.049964 containerd[1588]: time="2026-01-17T00:41:07.049750982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:07.050968 containerd[1588]: time="2026-01-17T00:41:07.050065237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:07.245916 containerd[1588]: time="2026-01-17T00:41:07.245472219Z" level=info msg="StartContainer for \"9c82f6a29e3e69308ccee61b818d9ae10e5b8274c9ffc8f24294d239a3e11daf\" returns successfully" Jan 17 00:41:07.245916 containerd[1588]: time="2026-01-17T00:41:07.245497342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-84kmr,Uid:e7d4f04b-e456-42ef-9dad-130d712092ab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7596d8e084bb121327d927e538e090ffae74d7d3355ac581d07c742581a6581e\"" Jan 17 00:41:07.260832 containerd[1588]: time="2026-01-17T00:41:07.259482640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:41:07.505969 kubelet[2796]: E0117 00:41:07.505842 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:07.510577 kubelet[2796]: E0117 00:41:07.509423 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:07.510577 kubelet[2796]: E0117 00:41:07.509930 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:07.571494 kubelet[2796]: I0117 00:41:07.568147 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xx8tt" podStartSLOduration=2.567899192 podStartE2EDuration="2.567899192s" podCreationTimestamp="2026-01-17 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:41:07.567464711 +0000 UTC m=+4.966309815" watchObservedRunningTime="2026-01-17 00:41:07.567899192 +0000 UTC m=+4.966744277" Jan 17 00:41:08.589502 kubelet[2796]: E0117 00:41:08.587622 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:09.853285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573172020.mount: Deactivated successfully. Jan 17 00:41:11.771475 kubelet[2796]: E0117 00:41:11.766642 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:14.088043 systemd-journald[1176]: Under memory pressure, flushing caches. Jan 17 00:41:14.042626 systemd-resolved[1470]: Under memory pressure, flushing caches. Jan 17 00:41:14.042905 systemd-resolved[1470]: Flushed all caches. Jan 17 00:41:14.223062 kubelet[2796]: E0117 00:41:14.222937 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:16.087584 systemd-resolved[1470]: Under memory pressure, flushing caches. Jan 17 00:41:16.108407 systemd-journald[1176]: Under memory pressure, flushing caches. Jan 17 00:41:16.087597 systemd-resolved[1470]: Flushed all caches. 
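
The recurring dns.go:153 "Nameserver limits exceeded" errors above are kubelet noting that the node's resolv.conf lists more nameservers than the three a pod's resolver can use (the classic glibc MAXNS limit); it keeps the first three, which is why every such entry shows the same applied line, 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation, assuming a hypothetical fourth server 8.8.4.4 (the omitted servers never appear in the log):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // at most three are applied, matching the glibc MAXNS limit

    func main() {
        // Hypothetical resolv.conf with one nameserver too many; the node
        // in this log evidently has a fourth entry that gets dropped.
        resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            applied := servers[:maxNameservers]
            fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }
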
Jan 17 00:41:17.650997 containerd[1588]: time="2026-01-17T00:41:17.646152420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:17.682236 containerd[1588]: time="2026-01-17T00:41:17.681526723Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:41:17.693432 containerd[1588]: time="2026-01-17T00:41:17.693332334Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:17.704907 containerd[1588]: time="2026-01-17T00:41:17.703483649Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:17.706077 containerd[1588]: time="2026-01-17T00:41:17.705152496Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 10.445584918s" Jan 17 00:41:17.706077 containerd[1588]: time="2026-01-17T00:41:17.705916362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:41:17.812460 containerd[1588]: time="2026-01-17T00:41:17.811972046Z" level=info msg="CreateContainer within sandbox \"7596d8e084bb121327d927e538e090ffae74d7d3355ac581d07c742581a6581e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:41:18.109010 containerd[1588]: time="2026-01-17T00:41:18.108474106Z" level=info msg="CreateContainer within sandbox \"7596d8e084bb121327d927e538e090ffae74d7d3355ac581d07c742581a6581e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8de3567f07c2b233ee1e7440c0254e49caa333b04708ddb0c6b13f0663a7cdac\"" Jan 17 00:41:18.112088 containerd[1588]: time="2026-01-17T00:41:18.110420038Z" level=info msg="StartContainer for \"8de3567f07c2b233ee1e7440c0254e49caa333b04708ddb0c6b13f0663a7cdac\"" Jan 17 00:41:18.499635 containerd[1588]: time="2026-01-17T00:41:18.497138942Z" level=info msg="StartContainer for \"8de3567f07c2b233ee1e7440c0254e49caa333b04708ddb0c6b13f0663a7cdac\" returns successfully" Jan 17 00:41:19.278926 kubelet[2796]: I0117 00:41:19.274153 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-84kmr" podStartSLOduration=2.796740791 podStartE2EDuration="13.274020026s" podCreationTimestamp="2026-01-17 00:41:06 +0000 UTC" firstStartedPulling="2026-01-17 00:41:07.258693151 +0000 UTC m=+4.657538226" lastFinishedPulling="2026-01-17 00:41:17.735972386 +0000 UTC m=+15.134817461" observedRunningTime="2026-01-17 00:41:19.254896306 +0000 UTC m=+16.653741381" watchObservedRunningTime="2026-01-17 00:41:19.274020026 +0000 UTC m=+16.672865112" Jan 17 00:41:28.125828 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 17 00:41:28.136019 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 17 00:41:28.152304 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:33180.service: Deactivated successfully. 
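
In the pod_startup_latency_tracker entry above for tigera-operator, podStartSLOduration (2.796740791s) is much smaller than podStartE2EDuration (13.274020026s) because the SLO figure excludes the image-pull window, and the timestamps in the entry reconcile exactly. A small Go check, with the values copied from that log entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the tracker entry above.
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-17 00:41:06 +0000 UTC")
        pullStart := parse("2026-01-17 00:41:07.258693151 +0000 UTC")
        pullEnd := parse("2026-01-17 00:41:17.735972386 +0000 UTC")
        observed := parse("2026-01-17 00:41:19.274020026 +0000 UTC")

        e2e := observed.Sub(created)        // 13.274020026s, the E2E duration
        slo := e2e - pullEnd.Sub(pullStart) // 2.796740791s: the pull window is excluded
        fmt.Println(e2e, slo)
    }

The earlier kube-proxy entry shows the two durations equal (2.567899192s) for the same reason: its firstStartedPulling/lastFinishedPulling are the zero time, so no pull window is subtracted.
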
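
From 00:41:47 below, kubelet's FlexVolume prober repeatedly logs a paired driver-call failure for nodeagent~uds: the uds binary is not yet present under /opt/libexec/kubernetes/kubelet-plugins/volume/exec (with Calico it is typically installed there by the calico-node flexvol-driver init container, via the flexvol-driver-host volume listed below), so each probe produces no output and the JSON decode of that empty reply fails. A sketch of how the two messages arise, assuming the standard FlexVolume contract of a JSON status document on stdout (not kubelet's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver must print on
    // stdout; an empty reply is what yields "unexpected end of JSON input".
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        // Step 1: resolve the driver binary. On this node it has not been
        // installed yet, so the lookup fails and the driver call is logged
        // as failed.
        if _, err := exec.LookPath(driver); err != nil {
            fmt.Println("driver call failed:", err)
        }

        // Step 2: the probe still tries to decode the (empty) output,
        // producing the companion unmarshal error seen in the log.
        var st DriverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
        }
    }
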
Jan 17 00:41:28.169661 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:41:28.170931 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:41:28.175258 systemd-logind[1555]: Removed session 9. Jan 17 00:41:47.301561 kubelet[2796]: I0117 00:41:47.300555 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5feb879c-e7a9-49ba-b22e-96bd22b86795-tigera-ca-bundle\") pod \"calico-typha-9646c7d8f-9mzbh\" (UID: \"5feb879c-e7a9-49ba-b22e-96bd22b86795\") " pod="calico-system/calico-typha-9646c7d8f-9mzbh" Jan 17 00:41:47.303341 kubelet[2796]: I0117 00:41:47.301571 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzns6\" (UniqueName: \"kubernetes.io/projected/5feb879c-e7a9-49ba-b22e-96bd22b86795-kube-api-access-wzns6\") pod \"calico-typha-9646c7d8f-9mzbh\" (UID: \"5feb879c-e7a9-49ba-b22e-96bd22b86795\") " pod="calico-system/calico-typha-9646c7d8f-9mzbh" Jan 17 00:41:47.303341 kubelet[2796]: I0117 00:41:47.302503 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5feb879c-e7a9-49ba-b22e-96bd22b86795-typha-certs\") pod \"calico-typha-9646c7d8f-9mzbh\" (UID: \"5feb879c-e7a9-49ba-b22e-96bd22b86795\") " pod="calico-system/calico-typha-9646c7d8f-9mzbh" Jan 17 00:41:47.614904 kubelet[2796]: I0117 00:41:47.612421 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411527ec-3f3a-425f-89a5-5e6afe2cb457-tigera-ca-bundle\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.614904 kubelet[2796]: I0117 00:41:47.612484 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-cni-log-dir\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.614904 kubelet[2796]: I0117 00:41:47.612515 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-xtables-lock\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.614904 kubelet[2796]: I0117 00:41:47.612536 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjr2z\" (UniqueName: \"kubernetes.io/projected/411527ec-3f3a-425f-89a5-5e6afe2cb457-kube-api-access-sjr2z\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.614904 kubelet[2796]: I0117 00:41:47.612561 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-var-lib-calico\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636636 kubelet[2796]: I0117 00:41:47.612585 2796 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-lib-modules\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636636 kubelet[2796]: I0117 00:41:47.612674 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-var-run-calico\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636636 kubelet[2796]: I0117 00:41:47.612700 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/411527ec-3f3a-425f-89a5-5e6afe2cb457-node-certs\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636636 kubelet[2796]: I0117 00:41:47.612723 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-flexvol-driver-host\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636636 kubelet[2796]: I0117 00:41:47.612744 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-policysync\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636974 kubelet[2796]: I0117 00:41:47.612770 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-cni-bin-dir\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.636974 kubelet[2796]: I0117 00:41:47.612794 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/411527ec-3f3a-425f-89a5-5e6afe2cb457-cni-net-dir\") pod \"calico-node-bhhdn\" (UID: \"411527ec-3f3a-425f-89a5-5e6afe2cb457\") " pod="calico-system/calico-node-bhhdn" Jan 17 00:41:47.752688 kubelet[2796]: E0117 00:41:47.749440 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.752688 kubelet[2796]: W0117 00:41:47.749486 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.752688 kubelet[2796]: E0117 00:41:47.749676 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.787022 kubelet[2796]: E0117 00:41:47.771700 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.787022 kubelet[2796]: W0117 00:41:47.771736 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.787022 kubelet[2796]: E0117 00:41:47.771769 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.787022 kubelet[2796]: E0117 00:41:47.772535 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.787022 kubelet[2796]: W0117 00:41:47.772556 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.787022 kubelet[2796]: E0117 00:41:47.772583 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.793299 kubelet[2796]: E0117 00:41:47.793167 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.793299 kubelet[2796]: W0117 00:41:47.793281 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.794065 kubelet[2796]: E0117 00:41:47.794035 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.795419 kubelet[2796]: E0117 00:41:47.794727 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.795419 kubelet[2796]: W0117 00:41:47.794744 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.795831 kubelet[2796]: E0117 00:41:47.795709 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.797730 kubelet[2796]: E0117 00:41:47.797309 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.797730 kubelet[2796]: W0117 00:41:47.797330 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.797730 kubelet[2796]: E0117 00:41:47.797352 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.838402 kubelet[2796]: E0117 00:41:47.832970 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.838402 kubelet[2796]: W0117 00:41:47.833003 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.838402 kubelet[2796]: E0117 00:41:47.833034 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.886314 kubelet[2796]: E0117 00:41:47.885532 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:47.895281 kubelet[2796]: E0117 00:41:47.894467 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.895281 kubelet[2796]: W0117 00:41:47.894541 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.895281 kubelet[2796]: E0117 00:41:47.894571 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.896419 kubelet[2796]: E0117 00:41:47.896355 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.896419 kubelet[2796]: W0117 00:41:47.896399 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.896551 kubelet[2796]: E0117 00:41:47.896422 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.900446 kubelet[2796]: E0117 00:41:47.900360 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.900446 kubelet[2796]: W0117 00:41:47.900402 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.900646 kubelet[2796]: E0117 00:41:47.900477 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.907788 kubelet[2796]: E0117 00:41:47.901294 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.907788 kubelet[2796]: W0117 00:41:47.901439 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.907788 kubelet[2796]: E0117 00:41:47.901458 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.919751 kubelet[2796]: E0117 00:41:47.919320 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.923797 kubelet[2796]: W0117 00:41:47.919445 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.923797 kubelet[2796]: E0117 00:41:47.921303 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.923797 kubelet[2796]: E0117 00:41:47.923715 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.923797 kubelet[2796]: W0117 00:41:47.923732 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.923797 kubelet[2796]: E0117 00:41:47.923754 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.927109 kubelet[2796]: E0117 00:41:47.926561 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.927109 kubelet[2796]: W0117 00:41:47.926577 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.927109 kubelet[2796]: E0117 00:41:47.926682 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.927924 kubelet[2796]: E0117 00:41:47.927872 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.927924 kubelet[2796]: W0117 00:41:47.927903 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.928048 kubelet[2796]: E0117 00:41:47.928002 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.929433 kubelet[2796]: E0117 00:41:47.929130 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.929433 kubelet[2796]: W0117 00:41:47.929160 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.929433 kubelet[2796]: E0117 00:41:47.929250 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.936347 kubelet[2796]: E0117 00:41:47.934039 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.936347 kubelet[2796]: W0117 00:41:47.934070 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.936347 kubelet[2796]: E0117 00:41:47.934281 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.941415 kubelet[2796]: E0117 00:41:47.939438 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.941415 kubelet[2796]: W0117 00:41:47.939529 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.941415 kubelet[2796]: E0117 00:41:47.939559 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.943378 kubelet[2796]: E0117 00:41:47.943352 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.944753 kubelet[2796]: W0117 00:41:47.943567 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.944753 kubelet[2796]: E0117 00:41:47.944700 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.946809 kubelet[2796]: E0117 00:41:47.946476 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.947348 kubelet[2796]: W0117 00:41:47.946907 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.947348 kubelet[2796]: E0117 00:41:47.946937 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.950249 kubelet[2796]: E0117 00:41:47.947907 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.950249 kubelet[2796]: W0117 00:41:47.947921 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.950249 kubelet[2796]: E0117 00:41:47.947939 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.961496 kubelet[2796]: E0117 00:41:47.960984 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.961496 kubelet[2796]: W0117 00:41:47.961023 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.961496 kubelet[2796]: E0117 00:41:47.961066 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.962453 kubelet[2796]: E0117 00:41:47.962265 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.962453 kubelet[2796]: W0117 00:41:47.962287 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.962453 kubelet[2796]: E0117 00:41:47.962313 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.963269 kubelet[2796]: E0117 00:41:47.963135 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.963269 kubelet[2796]: W0117 00:41:47.963156 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.968506 kubelet[2796]: E0117 00:41:47.967972 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.971421 kubelet[2796]: I0117 00:41:47.971382 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d6f5cd7-ec64-4020-903c-bd9456eec0b4-registration-dir\") pod \"csi-node-driver-7kh68\" (UID: \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\") " pod="calico-system/csi-node-driver-7kh68" Jan 17 00:41:47.972697 kubelet[2796]: E0117 00:41:47.971679 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.972697 kubelet[2796]: W0117 00:41:47.972028 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.972697 kubelet[2796]: E0117 00:41:47.972061 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.974119 kubelet[2796]: E0117 00:41:47.974065 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.974668 kubelet[2796]: W0117 00:41:47.974401 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.974927 kubelet[2796]: E0117 00:41:47.974899 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.975399 kubelet[2796]: E0117 00:41:47.975288 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.977257 kubelet[2796]: W0117 00:41:47.975411 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.977257 kubelet[2796]: E0117 00:41:47.975433 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.977257 kubelet[2796]: I0117 00:41:47.975539 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6f5cd7-ec64-4020-903c-bd9456eec0b4-kubelet-dir\") pod \"csi-node-driver-7kh68\" (UID: \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\") " pod="calico-system/csi-node-driver-7kh68" Jan 17 00:41:47.977257 kubelet[2796]: E0117 00:41:47.976979 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.977257 kubelet[2796]: W0117 00:41:47.977090 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.977257 kubelet[2796]: E0117 00:41:47.977110 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.977509 kubelet[2796]: I0117 00:41:47.977282 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d6f5cd7-ec64-4020-903c-bd9456eec0b4-socket-dir\") pod \"csi-node-driver-7kh68\" (UID: \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\") " pod="calico-system/csi-node-driver-7kh68" Jan 17 00:41:47.979473 kubelet[2796]: E0117 00:41:47.979352 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.979473 kubelet[2796]: W0117 00:41:47.979397 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.979711 kubelet[2796]: E0117 00:41:47.979581 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.979711 kubelet[2796]: I0117 00:41:47.979672 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1d6f5cd7-ec64-4020-903c-bd9456eec0b4-varrun\") pod \"csi-node-driver-7kh68\" (UID: \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\") " pod="calico-system/csi-node-driver-7kh68" Jan 17 00:41:47.984427 kubelet[2796]: E0117 00:41:47.982500 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.984427 kubelet[2796]: W0117 00:41:47.983986 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.985407 kubelet[2796]: E0117 00:41:47.985357 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.988767 kubelet[2796]: E0117 00:41:47.988740 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.988767 kubelet[2796]: W0117 00:41:47.988762 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.988983 kubelet[2796]: E0117 00:41:47.988929 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.990246 kubelet[2796]: E0117 00:41:47.989897 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.990246 kubelet[2796]: W0117 00:41:47.989913 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.990246 kubelet[2796]: E0117 00:41:47.990085 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.993111 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.997410 kubelet[2796]: W0117 00:41:47.993138 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.993475 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.993783 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.997410 kubelet[2796]: W0117 00:41:47.993797 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.994012 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.994294 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:47.997410 kubelet[2796]: W0117 00:41:47.994305 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.994421 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:47.997410 kubelet[2796]: E0117 00:41:47.996731 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.007028 kubelet[2796]: W0117 00:41:47.996748 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.007028 kubelet[2796]: E0117 00:41:47.996812 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.007028 kubelet[2796]: E0117 00:41:47.997330 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.007028 kubelet[2796]: W0117 00:41:47.997344 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.007028 kubelet[2796]: E0117 00:41:47.997358 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.007028 kubelet[2796]: E0117 00:41:47.998089 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.007028 kubelet[2796]: W0117 00:41:47.998104 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.007028 kubelet[2796]: E0117 00:41:47.998121 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.009447 kubelet[2796]: E0117 00:41:48.009365 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:48.009999 kubelet[2796]: E0117 00:41:48.009864 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.009999 kubelet[2796]: W0117 00:41:48.009919 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.009999 kubelet[2796]: E0117 00:41:48.009951 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.016471 containerd[1588]: time="2026-01-17T00:41:48.016396911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9646c7d8f-9mzbh,Uid:5feb879c-e7a9-49ba-b22e-96bd22b86795,Namespace:calico-system,Attempt:0,}" Jan 17 00:41:48.046921 kubelet[2796]: E0117 00:41:48.044571 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:48.048280 containerd[1588]: time="2026-01-17T00:41:48.047793835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhhdn,Uid:411527ec-3f3a-425f-89a5-5e6afe2cb457,Namespace:calico-system,Attempt:0,}" Jan 17 00:41:48.085811 kubelet[2796]: E0117 00:41:48.085298 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.085811 kubelet[2796]: W0117 00:41:48.085340 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.085811 kubelet[2796]: E0117 00:41:48.085417 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.088037 kubelet[2796]: E0117 00:41:48.087694 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.088037 kubelet[2796]: W0117 00:41:48.087735 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.088037 kubelet[2796]: E0117 00:41:48.087801 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.093953 kubelet[2796]: E0117 00:41:48.093814 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.093953 kubelet[2796]: W0117 00:41:48.093833 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.094312 kubelet[2796]: E0117 00:41:48.094079 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.095450 kubelet[2796]: E0117 00:41:48.095291 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.095450 kubelet[2796]: W0117 00:41:48.095444 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.098221 kubelet[2796]: E0117 00:41:48.097793 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.102000 kubelet[2796]: E0117 00:41:48.101029 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.102000 kubelet[2796]: W0117 00:41:48.101133 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.103829 kubelet[2796]: E0117 00:41:48.103317 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.106494 kubelet[2796]: E0117 00:41:48.105848 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.106494 kubelet[2796]: W0117 00:41:48.105866 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.106494 kubelet[2796]: E0117 00:41:48.106137 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.107476 kubelet[2796]: E0117 00:41:48.106706 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.107476 kubelet[2796]: W0117 00:41:48.106719 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.107476 kubelet[2796]: E0117 00:41:48.107141 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.110356 kubelet[2796]: E0117 00:41:48.109668 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.110356 kubelet[2796]: W0117 00:41:48.109772 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.110356 kubelet[2796]: E0117 00:41:48.110270 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.111898 kubelet[2796]: E0117 00:41:48.111635 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.111898 kubelet[2796]: W0117 00:41:48.111655 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.111898 kubelet[2796]: E0117 00:41:48.111895 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.112503 kubelet[2796]: E0117 00:41:48.112484 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.112790 kubelet[2796]: W0117 00:41:48.112579 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.113127 kubelet[2796]: E0117 00:41:48.112877 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.114078 kubelet[2796]: I0117 00:41:48.113864 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9l4t\" (UniqueName: \"kubernetes.io/projected/1d6f5cd7-ec64-4020-903c-bd9456eec0b4-kube-api-access-v9l4t\") pod \"csi-node-driver-7kh68\" (UID: \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\") " pod="calico-system/csi-node-driver-7kh68" Jan 17 00:41:48.114078 kubelet[2796]: E0117 00:41:48.113990 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.114078 kubelet[2796]: W0117 00:41:48.114003 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.114431 kubelet[2796]: E0117 00:41:48.114317 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.115653 kubelet[2796]: E0117 00:41:48.115631 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.115892 kubelet[2796]: W0117 00:41:48.115724 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.116076 kubelet[2796]: E0117 00:41:48.116055 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.116532 kubelet[2796]: E0117 00:41:48.116379 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.116532 kubelet[2796]: W0117 00:41:48.116395 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.116933 kubelet[2796]: E0117 00:41:48.116686 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.117927 kubelet[2796]: E0117 00:41:48.117874 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.118276 kubelet[2796]: W0117 00:41:48.118172 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.118526 kubelet[2796]: E0117 00:41:48.118453 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.119103 kubelet[2796]: E0117 00:41:48.118963 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.119103 kubelet[2796]: W0117 00:41:48.118981 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.119103 kubelet[2796]: E0117 00:41:48.119076 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.119932 kubelet[2796]: E0117 00:41:48.119799 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.119932 kubelet[2796]: W0117 00:41:48.119913 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.120901 kubelet[2796]: E0117 00:41:48.120392 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.121705 kubelet[2796]: E0117 00:41:48.121629 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.121705 kubelet[2796]: W0117 00:41:48.121671 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.121922 kubelet[2796]: E0117 00:41:48.121875 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.122814 kubelet[2796]: E0117 00:41:48.122797 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.122896 kubelet[2796]: W0117 00:41:48.122877 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.123140 kubelet[2796]: E0117 00:41:48.123118 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.125284 kubelet[2796]: E0117 00:41:48.125054 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.125284 kubelet[2796]: W0117 00:41:48.125071 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.125405 kubelet[2796]: E0117 00:41:48.125388 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.125778 kubelet[2796]: E0117 00:41:48.125759 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.125934 kubelet[2796]: W0117 00:41:48.125862 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.126302 kubelet[2796]: E0117 00:41:48.126251 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.126780 kubelet[2796]: E0117 00:41:48.126544 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.126780 kubelet[2796]: W0117 00:41:48.126562 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.126780 kubelet[2796]: E0117 00:41:48.126663 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.127048 kubelet[2796]: E0117 00:41:48.127011 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.127048 kubelet[2796]: W0117 00:41:48.127045 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.127139 kubelet[2796]: E0117 00:41:48.127062 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.128160 kubelet[2796]: E0117 00:41:48.128018 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.128160 kubelet[2796]: W0117 00:41:48.128061 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.128160 kubelet[2796]: E0117 00:41:48.128081 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.219543 kubelet[2796]: E0117 00:41:48.219109 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.219543 kubelet[2796]: W0117 00:41:48.219349 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.219543 kubelet[2796]: E0117 00:41:48.219381 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:41:48.272843 kubelet[2796]: E0117 00:41:48.271042 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.272843 kubelet[2796]: W0117 00:41:48.271323 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.272843 kubelet[2796]: E0117 00:41:48.271722 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.272843 kubelet[2796]: E0117 00:41:48.272508 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.272843 kubelet[2796]: W0117 00:41:48.272522 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.272843 kubelet[2796]: E0117 00:41:48.272643 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.281353 kubelet[2796]: E0117 00:41:48.280931 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.281353 kubelet[2796]: W0117 00:41:48.280972 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.281353 kubelet[2796]: E0117 00:41:48.281004 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.284546 kubelet[2796]: E0117 00:41:48.282537 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.284546 kubelet[2796]: W0117 00:41:48.282552 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.284546 kubelet[2796]: E0117 00:41:48.282570 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.289701 containerd[1588]: time="2026-01-17T00:41:48.289399485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:48.292129 containerd[1588]: time="2026-01-17T00:41:48.289537532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:48.292129 containerd[1588]: time="2026-01-17T00:41:48.290356870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:48.292129 containerd[1588]: time="2026-01-17T00:41:48.290534921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:48.319357 containerd[1588]: time="2026-01-17T00:41:48.312456310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:48.319357 containerd[1588]: time="2026-01-17T00:41:48.312651394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:48.319357 containerd[1588]: time="2026-01-17T00:41:48.312680037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:48.319357 containerd[1588]: time="2026-01-17T00:41:48.312843843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:48.340854 kubelet[2796]: E0117 00:41:48.340714 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:41:48.340854 kubelet[2796]: W0117 00:41:48.340750 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:41:48.340854 kubelet[2796]: E0117 00:41:48.340784 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:41:48.682733 containerd[1588]: time="2026-01-17T00:41:48.682556704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhhdn,Uid:411527ec-3f3a-425f-89a5-5e6afe2cb457,Namespace:calico-system,Attempt:0,} returns sandbox id \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\"" Jan 17 00:41:48.704441 kubelet[2796]: E0117 00:41:48.704336 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:48.708037 containerd[1588]: time="2026-01-17T00:41:48.706558531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9646c7d8f-9mzbh,Uid:5feb879c-e7a9-49ba-b22e-96bd22b86795,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e137517c61e24cd1c3bba564e21a403b0f5f7c43c4230673223248cb61bfd3a\"" Jan 17 00:41:48.708037 containerd[1588]: time="2026-01-17T00:41:48.707344426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:41:48.709094 kubelet[2796]: E0117 00:41:48.708868 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:49.385714 kubelet[2796]: E0117 00:41:49.383082 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:49.975974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192939498.mount: Deactivated successfully. 
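
[Note] The burst of kubelet errors above is one failure reported three ways: the dynamic plugin prober finds the directory nodeagent~uds under the FlexVolume plugin path, tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init", gets "executable file not found in $PATH" and therefore empty stdout, and then fails to unmarshal that empty string as the JSON status a FlexVolume driver must print. Below is a minimal sketch of the handshake the prober expects; it is not Calico's actual uds driver (the pod2daemon-flexvol image pulled in the surrounding lines exists to install the real binary into that directory), just the shape of a well-formed "init" response.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape kubelet's driver-call.go parses
// from a FlexVolume driver's stdout.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Answering "init" with a Success status is what satisfies the
		// prober: empty stdout is exactly what kubelet was failing to
		// unmarshal in the log above.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other call: report "Not supported" rather than printing nothing.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
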
Jan 17 00:41:50.692865 containerd[1588]: time="2026-01-17T00:41:50.689560708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:50.692865 containerd[1588]: time="2026-01-17T00:41:50.692886388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 17 00:41:50.696068 containerd[1588]: time="2026-01-17T00:41:50.695391719Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:50.702139 containerd[1588]: time="2026-01-17T00:41:50.701447128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:50.705677 containerd[1588]: time="2026-01-17T00:41:50.705632528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.998206019s" Jan 17 00:41:50.705759 containerd[1588]: time="2026-01-17T00:41:50.705684235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:41:50.719922 containerd[1588]: time="2026-01-17T00:41:50.718060859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:41:50.770463 containerd[1588]: time="2026-01-17T00:41:50.770306014Z" level=info msg="CreateContainer within sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:41:50.877244 containerd[1588]: time="2026-01-17T00:41:50.877022948Z" level=info msg="CreateContainer within sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae\"" Jan 17 00:41:50.887174 containerd[1588]: time="2026-01-17T00:41:50.884812076Z" level=info msg="StartContainer for \"29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae\"" Jan 17 00:41:51.116802 containerd[1588]: time="2026-01-17T00:41:51.116740555Z" level=info msg="StartContainer for \"29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae\" returns successfully" Jan 17 00:41:51.390815 kubelet[2796]: E0117 00:41:51.387432 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:51.437519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae-rootfs.mount: Deactivated successfully. 
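
[Note] The recurring dns.go:153 "Nameserver limits exceeded" errors are a truncation warning, not a lookup failure: the libc resolver honors at most three nameservers, so kubelet applies only the first three from the host's resolv.conf (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and logs that the rest were omitted. A sketch of that truncation follows; the fourth entry is an assumed placeholder, since the log only shows the three servers that survived the cut.

package main

import "fmt"

// The resolver limit kubelet enforces; three matches glibc's MAXNS.
const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		// Extras are dropped and reported as omitted, as in the log.
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// 9.9.9.9 is hypothetical; the real omitted entries are unknown.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(applyNameserverLimit(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
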
Jan 17 00:41:51.468978 containerd[1588]: time="2026-01-17T00:41:51.468474283Z" level=info msg="shim disconnected" id=29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae namespace=k8s.io Jan 17 00:41:51.468978 containerd[1588]: time="2026-01-17T00:41:51.468679505Z" level=warning msg="cleaning up after shim disconnected" id=29e4d579fd520eaa1287df1a0d33f42f1e0ea783e5e8e5268a7d8952126631ae namespace=k8s.io Jan 17 00:41:51.468978 containerd[1588]: time="2026-01-17T00:41:51.468698962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:41:51.545282 containerd[1588]: time="2026-01-17T00:41:51.541751571Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:41:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:41:52.039431 kubelet[2796]: E0117 00:41:52.038775 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:53.391978 kubelet[2796]: E0117 00:41:53.391074 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:54.904792 containerd[1588]: time="2026-01-17T00:41:54.898838371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:54.904792 containerd[1588]: time="2026-01-17T00:41:54.900939612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 17 00:41:54.910504 containerd[1588]: time="2026-01-17T00:41:54.906901047Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:54.921395 containerd[1588]: time="2026-01-17T00:41:54.920035321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:54.921395 containerd[1588]: time="2026-01-17T00:41:54.921095248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.20298166s" Jan 17 00:41:54.921395 containerd[1588]: time="2026-01-17T00:41:54.921141624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:41:54.926125 containerd[1588]: time="2026-01-17T00:41:54.925843502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:41:55.000510 containerd[1588]: time="2026-01-17T00:41:55.000452100Z" level=info msg="CreateContainer within sandbox \"8e137517c61e24cd1c3bba564e21a403b0f5f7c43c4230673223248cb61bfd3a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:41:55.063896 
containerd[1588]: time="2026-01-17T00:41:55.063490448Z" level=info msg="CreateContainer within sandbox \"8e137517c61e24cd1c3bba564e21a403b0f5f7c43c4230673223248cb61bfd3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b61da7f66659a6c5da30655673a11918dc44c197b1dc7078194242c37cd8d68\"" Jan 17 00:41:55.069688 containerd[1588]: time="2026-01-17T00:41:55.069367286Z" level=info msg="StartContainer for \"1b61da7f66659a6c5da30655673a11918dc44c197b1dc7078194242c37cd8d68\"" Jan 17 00:41:55.312036 containerd[1588]: time="2026-01-17T00:41:55.311856642Z" level=info msg="StartContainer for \"1b61da7f66659a6c5da30655673a11918dc44c197b1dc7078194242c37cd8d68\" returns successfully" Jan 17 00:41:55.393934 kubelet[2796]: E0117 00:41:55.392640 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:56.070871 kubelet[2796]: E0117 00:41:56.067530 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:56.110872 kubelet[2796]: I0117 00:41:56.110167 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9646c7d8f-9mzbh" podStartSLOduration=3.897493672 podStartE2EDuration="10.110145297s" podCreationTimestamp="2026-01-17 00:41:46 +0000 UTC" firstStartedPulling="2026-01-17 00:41:48.712438222 +0000 UTC m=+46.111283297" lastFinishedPulling="2026-01-17 00:41:54.925089846 +0000 UTC m=+52.323934922" observedRunningTime="2026-01-17 00:41:56.105746705 +0000 UTC m=+53.504591801" watchObservedRunningTime="2026-01-17 00:41:56.110145297 +0000 UTC m=+53.508990372" Jan 17 00:41:57.073272 kubelet[2796]: E0117 00:41:57.073057 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:57.384435 kubelet[2796]: E0117 00:41:57.382992 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:41:58.082685 kubelet[2796]: E0117 00:41:58.080630 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:59.383755 kubelet[2796]: E0117 00:41:59.382719 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:01.416470 kubelet[2796]: E0117 00:42:01.412809 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" 
podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:02.195124 containerd[1588]: time="2026-01-17T00:42:02.194026927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:02.203028 containerd[1588]: time="2026-01-17T00:42:02.200217650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:42:02.207827 containerd[1588]: time="2026-01-17T00:42:02.207710432Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:02.233460 containerd[1588]: time="2026-01-17T00:42:02.232649900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:02.260309 containerd[1588]: time="2026-01-17T00:42:02.247575251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.321474309s" Jan 17 00:42:02.260309 containerd[1588]: time="2026-01-17T00:42:02.253336364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:42:02.305337 containerd[1588]: time="2026-01-17T00:42:02.305275329Z" level=info msg="CreateContainer within sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:42:02.633718 containerd[1588]: time="2026-01-17T00:42:02.633133706Z" level=info msg="CreateContainer within sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd\"" Jan 17 00:42:02.636325 containerd[1588]: time="2026-01-17T00:42:02.636151967Z" level=info msg="StartContainer for \"cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd\"" Jan 17 00:42:05.338290 kubelet[2796]: E0117 00:42:05.326799 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:06.706951 containerd[1588]: time="2026-01-17T00:42:06.692956018Z" level=error msg="get state for cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd" error="context deadline exceeded: unknown" Jan 17 00:42:06.706951 containerd[1588]: time="2026-01-17T00:42:06.693396648Z" level=warning msg="unknown status" status=0 Jan 17 00:42:06.733086 containerd[1588]: time="2026-01-17T00:42:06.732848086Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 17 00:42:06.870052 containerd[1588]: time="2026-01-17T00:42:06.868075945Z" level=info msg="StartContainer for \"cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd\" returns successfully" Jan 17 00:42:06.894373 
kubelet[2796]: E0117 00:42:06.893982 2796 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.168s" Jan 17 00:42:07.910530 kubelet[2796]: E0117 00:42:07.910330 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:07.917873 kubelet[2796]: E0117 00:42:07.912071 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:08.916018 kubelet[2796]: E0117 00:42:08.915766 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:09.381766 kubelet[2796]: E0117 00:42:09.381677 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:09.382294 kubelet[2796]: E0117 00:42:09.382150 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:09.483875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd-rootfs.mount: Deactivated successfully. 
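
[Note] The pod_startup_latency_tracker line for calico-typha-9646c7d8f-9mzbh above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check using Go's default timestamp layout, as printed in the log, lands within a nanosecond of the logged 3.897493672s (kubelet subtracts the monotonic m=+... readings, hence the last-digit difference):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.String() layout, matching the kubelet log.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-17 00:41:46 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-17 00:41:56.110145297 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2026-01-17 00:41:48.712438222 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2026-01-17 00:41:54.925089846 +0000 UTC")

	e2e := running.Sub(created)        // 10.110145297s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // ~6.212651624s spent pulling images
	// e2e - pulling ≈ 3.897493672s = podStartSLOduration
	fmt.Println(e2e, pulling, e2e-pulling)
}
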
Jan 17 00:42:09.491851 kubelet[2796]: I0117 00:42:09.491777 2796 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:42:09.494110 containerd[1588]: time="2026-01-17T00:42:09.493800545Z" level=info msg="shim disconnected" id=cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd namespace=k8s.io Jan 17 00:42:09.494110 containerd[1588]: time="2026-01-17T00:42:09.493843385Z" level=warning msg="cleaning up after shim disconnected" id=cacd2a55b7c6291c98953e32aaa4802fd6c7706245b00dea03b10495d735a9dd namespace=k8s.io Jan 17 00:42:09.494110 containerd[1588]: time="2026-01-17T00:42:09.493858413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:42:09.584537 kubelet[2796]: I0117 00:42:09.583466 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjs9t\" (UniqueName: \"kubernetes.io/projected/5e3f00cb-8452-40aa-ab56-8dc0975dc08a-kube-api-access-vjs9t\") pod \"calico-kube-controllers-744d6dbcbc-9t986\" (UID: \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\") " pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" Jan 17 00:42:09.584537 kubelet[2796]: I0117 00:42:09.583517 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfbb8f9c-282b-42d0-90d7-a8ecd35e843f-config-volume\") pod \"coredns-668d6bf9bc-mvkvx\" (UID: \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\") " pod="kube-system/coredns-668d6bf9bc-mvkvx" Jan 17 00:42:09.584537 kubelet[2796]: I0117 00:42:09.583547 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3f00cb-8452-40aa-ab56-8dc0975dc08a-tigera-ca-bundle\") pod \"calico-kube-controllers-744d6dbcbc-9t986\" (UID: \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\") " pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" Jan 17 00:42:09.584537 kubelet[2796]: I0117 00:42:09.583666 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7fw\" (UniqueName: \"kubernetes.io/projected/bfbb8f9c-282b-42d0-90d7-a8ecd35e843f-kube-api-access-wc7fw\") pod \"coredns-668d6bf9bc-mvkvx\" (UID: \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\") " pod="kube-system/coredns-668d6bf9bc-mvkvx" Jan 17 00:42:09.686572 kubelet[2796]: I0117 00:42:09.684532 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8z8\" (UniqueName: \"kubernetes.io/projected/12e8789b-d87c-447d-950e-1991d31141d1-kube-api-access-qc8z8\") pod \"goldmane-666569f655-g2n27\" (UID: \"12e8789b-d87c-447d-950e-1991d31141d1\") " pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:09.686572 kubelet[2796]: I0117 00:42:09.684679 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/907968b1-857c-479e-a0ab-2b58db52b182-calico-apiserver-certs\") pod \"calico-apiserver-7b478fd4fd-xbslh\" (UID: \"907968b1-857c-479e-a0ab-2b58db52b182\") " pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" Jan 17 00:42:09.686572 kubelet[2796]: I0117 00:42:09.684716 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa612b8d-2f4c-467c-9d4c-78c8e06b8f95-calico-apiserver-certs\") pod 
\"calico-apiserver-7b478fd4fd-bk9rz\" (UID: \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\") " pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" Jan 17 00:42:09.686572 kubelet[2796]: I0117 00:42:09.684747 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e5c881-067d-4192-8764-acc37b9b15b6-config-volume\") pod \"coredns-668d6bf9bc-mtq9l\" (UID: \"80e5c881-067d-4192-8764-acc37b9b15b6\") " pod="kube-system/coredns-668d6bf9bc-mtq9l" Jan 17 00:42:09.686572 kubelet[2796]: I0117 00:42:09.684776 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtqs\" (UniqueName: \"kubernetes.io/projected/d71d28bd-74af-4e97-9fb5-9a08939c13d5-kube-api-access-djtqs\") pod \"whisker-6889d59764-9h7nx\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " pod="calico-system/whisker-6889d59764-9h7nx" Jan 17 00:42:09.687142 kubelet[2796]: I0117 00:42:09.684828 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flkx\" (UniqueName: \"kubernetes.io/projected/80e5c881-067d-4192-8764-acc37b9b15b6-kube-api-access-2flkx\") pod \"coredns-668d6bf9bc-mtq9l\" (UID: \"80e5c881-067d-4192-8764-acc37b9b15b6\") " pod="kube-system/coredns-668d6bf9bc-mtq9l" Jan 17 00:42:09.687142 kubelet[2796]: I0117 00:42:09.684858 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e8789b-d87c-447d-950e-1991d31141d1-config\") pod \"goldmane-666569f655-g2n27\" (UID: \"12e8789b-d87c-447d-950e-1991d31141d1\") " pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:09.687142 kubelet[2796]: I0117 00:42:09.684888 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/12e8789b-d87c-447d-950e-1991d31141d1-goldmane-key-pair\") pod \"goldmane-666569f655-g2n27\" (UID: \"12e8789b-d87c-447d-950e-1991d31141d1\") " pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:09.687142 kubelet[2796]: I0117 00:42:09.684917 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nk56\" (UniqueName: \"kubernetes.io/projected/aa612b8d-2f4c-467c-9d4c-78c8e06b8f95-kube-api-access-2nk56\") pod \"calico-apiserver-7b478fd4fd-bk9rz\" (UID: \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\") " pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" Jan 17 00:42:09.687142 kubelet[2796]: I0117 00:42:09.684952 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e8789b-d87c-447d-950e-1991d31141d1-goldmane-ca-bundle\") pod \"goldmane-666569f655-g2n27\" (UID: \"12e8789b-d87c-447d-950e-1991d31141d1\") " pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:09.687490 kubelet[2796]: I0117 00:42:09.684978 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-ca-bundle\") pod \"whisker-6889d59764-9h7nx\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " pod="calico-system/whisker-6889d59764-9h7nx" Jan 17 00:42:09.687490 kubelet[2796]: I0117 00:42:09.685054 2796 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-backend-key-pair\") pod \"whisker-6889d59764-9h7nx\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " pod="calico-system/whisker-6889d59764-9h7nx" Jan 17 00:42:09.687490 kubelet[2796]: I0117 00:42:09.685085 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcw4d\" (UniqueName: \"kubernetes.io/projected/907968b1-857c-479e-a0ab-2b58db52b182-kube-api-access-fcw4d\") pod \"calico-apiserver-7b478fd4fd-xbslh\" (UID: \"907968b1-857c-479e-a0ab-2b58db52b182\") " pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" Jan 17 00:42:09.888056 containerd[1588]: time="2026-01-17T00:42:09.886952920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744d6dbcbc-9t986,Uid:5e3f00cb-8452-40aa-ab56-8dc0975dc08a,Namespace:calico-system,Attempt:0,}" Jan 17 00:42:09.915134 kubelet[2796]: E0117 00:42:09.912899 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:09.916407 containerd[1588]: time="2026-01-17T00:42:09.914237257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mvkvx,Uid:bfbb8f9c-282b-42d0-90d7-a8ecd35e843f,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:09.920275 containerd[1588]: time="2026-01-17T00:42:09.919282237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-xbslh,Uid:907968b1-857c-479e-a0ab-2b58db52b182,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:42:09.926718 containerd[1588]: time="2026-01-17T00:42:09.923792178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2n27,Uid:12e8789b-d87c-447d-950e-1991d31141d1,Namespace:calico-system,Attempt:0,}" Jan 17 00:42:09.926718 containerd[1588]: time="2026-01-17T00:42:09.925033824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6889d59764-9h7nx,Uid:d71d28bd-74af-4e97-9fb5-9a08939c13d5,Namespace:calico-system,Attempt:0,}" Jan 17 00:42:09.926857 kubelet[2796]: E0117 00:42:09.925799 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:09.931783 containerd[1588]: time="2026-01-17T00:42:09.929031961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtq9l,Uid:80e5c881-067d-4192-8764-acc37b9b15b6,Namespace:kube-system,Attempt:0,}" Jan 17 00:42:09.932274 kubelet[2796]: E0117 00:42:09.930563 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:09.932362 containerd[1588]: time="2026-01-17T00:42:09.932320894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-bk9rz,Uid:aa612b8d-2f4c-467c-9d4c-78c8e06b8f95,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:42:09.989254 containerd[1588]: time="2026-01-17T00:42:09.977413826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:42:10.640963 containerd[1588]: time="2026-01-17T00:42:10.639989222Z" level=error msg="Failed to destroy network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.645014 containerd[1588]: time="2026-01-17T00:42:10.643895349Z" level=error msg="Failed to destroy network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.647626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580-shm.mount: Deactivated successfully. Jan 17 00:42:10.649256 containerd[1588]: time="2026-01-17T00:42:10.649047669Z" level=error msg="encountered an error cleaning up failed sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.652310 containerd[1588]: time="2026-01-17T00:42:10.649296212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2n27,Uid:12e8789b-d87c-447d-950e-1991d31141d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.653314 containerd[1588]: time="2026-01-17T00:42:10.653262590Z" level=error msg="Failed to destroy network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.654643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838-shm.mount: Deactivated successfully. Jan 17 00:42:10.656500 containerd[1588]: time="2026-01-17T00:42:10.656418446Z" level=error msg="encountered an error cleaning up failed sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.656567 containerd[1588]: time="2026-01-17T00:42:10.656507672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-xbslh,Uid:907968b1-857c-479e-a0ab-2b58db52b182,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.661261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924-shm.mount: Deactivated successfully. 
Jan 17 00:42:10.669408 containerd[1588]: time="2026-01-17T00:42:10.667990519Z" level=error msg="Failed to destroy network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.672709 containerd[1588]: time="2026-01-17T00:42:10.669964651Z" level=error msg="encountered an error cleaning up failed sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.672709 containerd[1588]: time="2026-01-17T00:42:10.670034120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6889d59764-9h7nx,Uid:d71d28bd-74af-4e97-9fb5-9a08939c13d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.672709 containerd[1588]: time="2026-01-17T00:42:10.670167229Z" level=error msg="Failed to destroy network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.672709 containerd[1588]: time="2026-01-17T00:42:10.670792304Z" level=error msg="encountered an error cleaning up failed sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.672709 containerd[1588]: time="2026-01-17T00:42:10.670970848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mvkvx,Uid:bfbb8f9c-282b-42d0-90d7-a8ecd35e843f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.675020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c-shm.mount: Deactivated successfully. 
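
[Note] Every sandbox failure in this stretch has the same root cause, spelled out in the error string itself: Calico's CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container's image is still being pulled (the node:v3.30.4 PullImage at 00:42:09.977 above). Until it starts, every pod sandbox add and delete fails identically, which is why the same errors continue below. A sketch of that gate, assuming only what the error message states (os.ReadFile stands in for the plugin's actual stat-then-read):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The file the CNI plugin requires before it will wire up any pod.
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the advice embedded in the log's error string.
		fmt.Fprintf(os.Stderr,
			"stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("CNI would use node name %q\n", string(data))
}
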
Jan 17 00:42:10.688142 containerd[1588]: time="2026-01-17T00:42:10.687921941Z" level=error msg="Failed to destroy network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.693649 containerd[1588]: time="2026-01-17T00:42:10.692798527Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.693649 containerd[1588]: time="2026-01-17T00:42:10.693478045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744d6dbcbc-9t986,Uid:5e3f00cb-8452-40aa-ab56-8dc0975dc08a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.694378 kubelet[2796]: E0117 00:42:10.694333 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.698106 kubelet[2796]: E0117 00:42:10.694522 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" Jan 17 00:42:10.698106 kubelet[2796]: E0117 00:42:10.694559 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" Jan 17 00:42:10.698106 kubelet[2796]: E0117 00:42:10.694671 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:10.700003 containerd[1588]: time="2026-01-17T00:42:10.697351088Z" level=error msg="Failed to destroy network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.700062 kubelet[2796]: E0117 00:42:10.695024 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.700062 kubelet[2796]: E0117 00:42:10.695054 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:10.700062 kubelet[2796]: E0117 00:42:10.695074 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2n27" Jan 17 00:42:10.700172 kubelet[2796]: E0117 00:42:10.695106 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:10.700172 kubelet[2796]: E0117 00:42:10.695142 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.700172 kubelet[2796]: E0117 00:42:10.695163 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" Jan 17 00:42:10.700955 kubelet[2796]: E0117 00:42:10.695236 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" Jan 17 00:42:10.700955 kubelet[2796]: E0117 00:42:10.695270 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:10.700955 kubelet[2796]: E0117 00:42:10.695304 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.701402 kubelet[2796]: E0117 00:42:10.695357 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6889d59764-9h7nx" Jan 17 00:42:10.701402 kubelet[2796]: E0117 00:42:10.695376 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6889d59764-9h7nx" Jan 17 00:42:10.701402 kubelet[2796]: E0117 00:42:10.695406 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6889d59764-9h7nx_calico-system(d71d28bd-74af-4e97-9fb5-9a08939c13d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6889d59764-9h7nx_calico-system(d71d28bd-74af-4e97-9fb5-9a08939c13d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6889d59764-9h7nx" podUID="d71d28bd-74af-4e97-9fb5-9a08939c13d5" Jan 17 00:42:10.701759 kubelet[2796]: E0117 00:42:10.695435 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.701759 kubelet[2796]: E0117 00:42:10.695457 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mvkvx" Jan 17 00:42:10.701759 kubelet[2796]: E0117 00:42:10.695473 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mvkvx" Jan 17 00:42:10.701861 kubelet[2796]: E0117 00:42:10.695503 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mvkvx_kube-system(bfbb8f9c-282b-42d0-90d7-a8ecd35e843f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mvkvx_kube-system(bfbb8f9c-282b-42d0-90d7-a8ecd35e843f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mvkvx" podUID="bfbb8f9c-282b-42d0-90d7-a8ecd35e843f" Jan 17 00:42:10.703811 containerd[1588]: time="2026-01-17T00:42:10.702333692Z" level=error msg="encountered an error cleaning up failed sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.703811 containerd[1588]: time="2026-01-17T00:42:10.702399134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtq9l,Uid:80e5c881-067d-4192-8764-acc37b9b15b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.703967 kubelet[2796]: E0117 00:42:10.703862 2796 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.703967 kubelet[2796]: E0117 00:42:10.703922 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtq9l" Jan 17 00:42:10.703967 kubelet[2796]: E0117 00:42:10.703951 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtq9l" Jan 17 00:42:10.704076 kubelet[2796]: E0117 00:42:10.703995 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtq9l_kube-system(80e5c881-067d-4192-8764-acc37b9b15b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtq9l_kube-system(80e5c881-067d-4192-8764-acc37b9b15b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtq9l" podUID="80e5c881-067d-4192-8764-acc37b9b15b6" Jan 17 00:42:10.716446 containerd[1588]: time="2026-01-17T00:42:10.716339503Z" level=error msg="encountered an error cleaning up failed sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.716569 containerd[1588]: time="2026-01-17T00:42:10.716469076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-bk9rz,Uid:aa612b8d-2f4c-467c-9d4c-78c8e06b8f95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:10.718717 kubelet[2796]: E0117 00:42:10.718479 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
17 00:42:10.718717 kubelet[2796]: E0117 00:42:10.718612 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" Jan 17 00:42:10.718717 kubelet[2796]: E0117 00:42:10.718642 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" Jan 17 00:42:10.720474 kubelet[2796]: E0117 00:42:10.718717 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:10.944071 kubelet[2796]: I0117 00:42:10.942558 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:10.954036 kubelet[2796]: I0117 00:42:10.954004 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:10.964737 kubelet[2796]: I0117 00:42:10.964391 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:11.025321 kubelet[2796]: I0117 00:42:11.011254 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.017787109Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.020848859Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.028931961Z" level=info msg="Ensure that sandbox 2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c in task-service has been cleanup successfully" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.029702298Z" level=info msg="Ensure that sandbox 9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a in task-service has been cleanup successfully" Jan 17 00:42:11.035798 
containerd[1588]: time="2026-01-17T00:42:11.029915084Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.031326230Z" level=info msg="Ensure that sandbox 6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838 in task-service has been cleanup successfully" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.031890428Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:42:11.035798 containerd[1588]: time="2026-01-17T00:42:11.034510324Z" level=info msg="Ensure that sandbox 2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340 in task-service has been cleanup successfully" Jan 17 00:42:11.055165 kubelet[2796]: I0117 00:42:11.054982 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:11.062725 containerd[1588]: time="2026-01-17T00:42:11.056002074Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:42:11.062725 containerd[1588]: time="2026-01-17T00:42:11.056492218Z" level=info msg="Ensure that sandbox 18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924 in task-service has been cleanup successfully" Jan 17 00:42:11.071662 kubelet[2796]: I0117 00:42:11.071627 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:11.098446 containerd[1588]: time="2026-01-17T00:42:11.089559648Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:42:11.098446 containerd[1588]: time="2026-01-17T00:42:11.097416356Z" level=info msg="Ensure that sandbox 146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f in task-service has been cleanup successfully" Jan 17 00:42:11.161925 kubelet[2796]: I0117 00:42:11.144802 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:11.179822 containerd[1588]: time="2026-01-17T00:42:11.177266985Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:42:11.179822 containerd[1588]: time="2026-01-17T00:42:11.177819807Z" level=info msg="Ensure that sandbox e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580 in task-service has been cleanup successfully" Jan 17 00:42:11.375143 containerd[1588]: time="2026-01-17T00:42:11.375080041Z" level=error msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" failed" error="failed to destroy network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.377234 kubelet[2796]: E0117 00:42:11.377013 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:11.377367 kubelet[2796]: E0117 00:42:11.377145 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924"} Jan 17 00:42:11.377367 kubelet[2796]: E0117 00:42:11.377316 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"907968b1-857c-479e-a0ab-2b58db52b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.377537 kubelet[2796]: E0117 00:42:11.377361 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"907968b1-857c-479e-a0ab-2b58db52b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:11.405690 containerd[1588]: time="2026-01-17T00:42:11.399090587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kh68,Uid:1d6f5cd7-ec64-4020-903c-bd9456eec0b4,Namespace:calico-system,Attempt:0,}" Jan 17 00:42:11.415787 containerd[1588]: time="2026-01-17T00:42:11.415683796Z" level=error msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" failed" error="failed to destroy network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.416622 kubelet[2796]: E0117 00:42:11.416139 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:11.416889 kubelet[2796]: E0117 00:42:11.416296 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340"} Jan 17 00:42:11.417239 kubelet[2796]: E0117 00:42:11.417032 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.417661 kubelet[2796]: E0117 00:42:11.417163 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:11.418678 containerd[1588]: time="2026-01-17T00:42:11.417960100Z" level=error msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" failed" error="failed to destroy network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.423463 containerd[1588]: time="2026-01-17T00:42:11.421307444Z" level=error msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" failed" error="failed to destroy network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.423704 kubelet[2796]: E0117 00:42:11.423000 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:11.423704 kubelet[2796]: E0117 00:42:11.423080 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f"} Jan 17 00:42:11.423704 kubelet[2796]: E0117 00:42:11.423136 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80e5c881-067d-4192-8764-acc37b9b15b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.423704 kubelet[2796]: E0117 00:42:11.423175 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80e5c881-067d-4192-8764-acc37b9b15b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtq9l" podUID="80e5c881-067d-4192-8764-acc37b9b15b6" Jan 17 00:42:11.424092 kubelet[2796]: E0117 00:42:11.423297 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:11.424092 kubelet[2796]: E0117 00:42:11.423326 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c"} Jan 17 00:42:11.424092 kubelet[2796]: E0117 00:42:11.423359 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.424092 kubelet[2796]: E0117 00:42:11.423394 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6889d59764-9h7nx" podUID="d71d28bd-74af-4e97-9fb5-9a08939c13d5" Jan 17 00:42:11.426724 containerd[1588]: time="2026-01-17T00:42:11.426504555Z" level=error msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" failed" error="failed to destroy network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.427668 kubelet[2796]: E0117 00:42:11.427492 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:11.427668 kubelet[2796]: E0117 00:42:11.427564 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838"} Jan 17 00:42:11.427668 kubelet[2796]: E0117 00:42:11.427658 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.427912 kubelet[2796]: E0117 00:42:11.427683 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:11.447373 containerd[1588]: time="2026-01-17T00:42:11.447071403Z" level=error msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" failed" error="failed to destroy network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.447652 containerd[1588]: time="2026-01-17T00:42:11.447483461Z" level=error msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" failed" error="failed to destroy network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.448913 kubelet[2796]: E0117 00:42:11.448665 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:11.448913 kubelet[2796]: E0117 00:42:11.448762 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a"} Jan 17 00:42:11.448913 kubelet[2796]: E0117 00:42:11.448818 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.448913 kubelet[2796]: E0117 00:42:11.448860 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mvkvx" podUID="bfbb8f9c-282b-42d0-90d7-a8ecd35e843f" Jan 17 00:42:11.461377 kubelet[2796]: E0117 00:42:11.459413 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:11.461377 kubelet[2796]: E0117 00:42:11.459486 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580"} Jan 17 00:42:11.461377 kubelet[2796]: E0117 00:42:11.459538 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12e8789b-d87c-447d-950e-1991d31141d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:11.461377 kubelet[2796]: E0117 00:42:11.459571 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12e8789b-d87c-447d-950e-1991d31141d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:11.489266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f-shm.mount: Deactivated successfully. Jan 17 00:42:11.489553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a-shm.mount: Deactivated successfully. Jan 17 00:42:11.489850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340-shm.mount: Deactivated successfully. 
Jan 17 00:42:11.713842 containerd[1588]: time="2026-01-17T00:42:11.713360981Z" level=error msg="Failed to destroy network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.723281 containerd[1588]: time="2026-01-17T00:42:11.718810716Z" level=error msg="encountered an error cleaning up failed sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.723281 containerd[1588]: time="2026-01-17T00:42:11.718924238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kh68,Uid:1d6f5cd7-ec64-4020-903c-bd9456eec0b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.723446 kubelet[2796]: E0117 00:42:11.719291 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:11.723446 kubelet[2796]: E0117 00:42:11.719432 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kh68" Jan 17 00:42:11.723446 kubelet[2796]: E0117 00:42:11.719470 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kh68" Jan 17 00:42:11.723671 kubelet[2796]: E0117 00:42:11.719531 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kh68" 
podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:11.725111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31-shm.mount: Deactivated successfully. Jan 17 00:42:12.175107 kubelet[2796]: I0117 00:42:12.173762 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:12.186451 containerd[1588]: time="2026-01-17T00:42:12.186142251Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:42:12.186904 containerd[1588]: time="2026-01-17T00:42:12.186744152Z" level=info msg="Ensure that sandbox beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31 in task-service has been cleanup successfully" Jan 17 00:42:12.291618 containerd[1588]: time="2026-01-17T00:42:12.291484842Z" level=error msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" failed" error="failed to destroy network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:12.292520 kubelet[2796]: E0117 00:42:12.292282 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:12.292520 kubelet[2796]: E0117 00:42:12.292355 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31"} Jan 17 00:42:12.292520 kubelet[2796]: E0117 00:42:12.292401 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:12.292520 kubelet[2796]: E0117 00:42:12.292432 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:15.392704 kubelet[2796]: E0117 00:42:15.392454 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 
00:42:22.383337 containerd[1588]: time="2026-01-17T00:42:22.383287242Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:42:22.446063 containerd[1588]: time="2026-01-17T00:42:22.444124065Z" level=error msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" failed" error="failed to destroy network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:22.446300 kubelet[2796]: E0117 00:42:22.446082 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:22.446300 kubelet[2796]: E0117 00:42:22.446168 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838"} Jan 17 00:42:22.447433 kubelet[2796]: E0117 00:42:22.446304 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:22.447433 kubelet[2796]: E0117 00:42:22.446337 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:24.390738 containerd[1588]: time="2026-01-17T00:42:24.389380781Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:42:24.511505 containerd[1588]: time="2026-01-17T00:42:24.509747094Z" level=error msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" failed" error="failed to destroy network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:24.511742 kubelet[2796]: E0117 00:42:24.511022 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:24.511742 kubelet[2796]: E0117 00:42:24.511100 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580"} Jan 17 00:42:24.511742 kubelet[2796]: E0117 00:42:24.511157 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12e8789b-d87c-447d-950e-1991d31141d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:24.511742 kubelet[2796]: E0117 00:42:24.511261 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12e8789b-d87c-447d-950e-1991d31141d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:25.369107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265282724.mount: Deactivated successfully. 
Jan 17 00:42:25.387008 containerd[1588]: time="2026-01-17T00:42:25.386952766Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:42:25.392920 containerd[1588]: time="2026-01-17T00:42:25.392862129Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:42:25.607403 containerd[1588]: time="2026-01-17T00:42:25.605667010Z" level=error msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" failed" error="failed to destroy network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:25.607735 kubelet[2796]: E0117 00:42:25.606350 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:25.607735 kubelet[2796]: E0117 00:42:25.606477 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340"} Jan 17 00:42:25.607735 kubelet[2796]: E0117 00:42:25.606545 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:25.607735 kubelet[2796]: E0117 00:42:25.606629 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e3f00cb-8452-40aa-ab56-8dc0975dc08a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:25.612172 kubelet[2796]: E0117 00:42:25.609536 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:25.612172 kubelet[2796]: E0117 00:42:25.609641 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31"} Jan 17 00:42:25.612172 kubelet[2796]: E0117 00:42:25.609690 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:25.612172 kubelet[2796]: E0117 00:42:25.609726 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d6f5cd7-ec64-4020-903c-bd9456eec0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:25.612695 containerd[1588]: time="2026-01-17T00:42:25.609078484Z" level=error msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" failed" error="failed to destroy network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:25.773287 containerd[1588]: time="2026-01-17T00:42:25.771440039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:25.787905 containerd[1588]: time="2026-01-17T00:42:25.787795118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:42:25.826901 containerd[1588]: time="2026-01-17T00:42:25.826288512Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:25.832671 containerd[1588]: time="2026-01-17T00:42:25.832550496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:42:25.837081 containerd[1588]: time="2026-01-17T00:42:25.836944482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 15.859391617s" Jan 17 00:42:25.837081 containerd[1588]: time="2026-01-17T00:42:25.837022276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:42:25.874844 containerd[1588]: time="2026-01-17T00:42:25.874770015Z" level=info msg="CreateContainer within 
sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:42:25.951896 containerd[1588]: time="2026-01-17T00:42:25.951813270Z" level=info msg="CreateContainer within sandbox \"c652be471fbcc7b181c4c952a70efe202c4a5351c3170e7b21849053204f9f9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7624f2e0df80dd58c9938d1a589e4d87fb65854e8eb926957f18f034d40f69da\"" Jan 17 00:42:25.954937 containerd[1588]: time="2026-01-17T00:42:25.954327272Z" level=info msg="StartContainer for \"7624f2e0df80dd58c9938d1a589e4d87fb65854e8eb926957f18f034d40f69da\"" Jan 17 00:42:26.188766 containerd[1588]: time="2026-01-17T00:42:26.187433663Z" level=info msg="StartContainer for \"7624f2e0df80dd58c9938d1a589e4d87fb65854e8eb926957f18f034d40f69da\" returns successfully" Jan 17 00:42:26.310903 kubelet[2796]: E0117 00:42:26.307319 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:26.384867 kubelet[2796]: I0117 00:42:26.384325 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bhhdn" podStartSLOduration=2.250562251 podStartE2EDuration="39.38430265s" podCreationTimestamp="2026-01-17 00:41:47 +0000 UTC" firstStartedPulling="2026-01-17 00:41:48.70565094 +0000 UTC m=+46.104496015" lastFinishedPulling="2026-01-17 00:42:25.839391329 +0000 UTC m=+83.238236414" observedRunningTime="2026-01-17 00:42:26.372622823 +0000 UTC m=+83.771467918" watchObservedRunningTime="2026-01-17 00:42:26.38430265 +0000 UTC m=+83.783147725" Jan 17 00:42:26.388547 containerd[1588]: time="2026-01-17T00:42:26.388464233Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:42:26.391931 containerd[1588]: time="2026-01-17T00:42:26.391745975Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:42:26.398892 containerd[1588]: time="2026-01-17T00:42:26.398407602Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:42:26.405165 containerd[1588]: time="2026-01-17T00:42:26.403635820Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:42:26.567811 containerd[1588]: time="2026-01-17T00:42:26.566419676Z" level=error msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" failed" error="failed to destroy network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:26.567944 kubelet[2796]: E0117 00:42:26.566824 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:26.567944 kubelet[2796]: E0117 00:42:26.566916 2796 kuberuntime_manager.go:1546] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f"} Jan 17 00:42:26.567944 kubelet[2796]: E0117 00:42:26.567685 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80e5c881-067d-4192-8764-acc37b9b15b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:26.567944 kubelet[2796]: E0117 00:42:26.567728 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80e5c881-067d-4192-8764-acc37b9b15b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtq9l" podUID="80e5c881-067d-4192-8764-acc37b9b15b6" Jan 17 00:42:26.588131 containerd[1588]: time="2026-01-17T00:42:26.587893441Z" level=error msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" failed" error="failed to destroy network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:26.590558 kubelet[2796]: E0117 00:42:26.589525 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:26.590558 kubelet[2796]: E0117 00:42:26.590049 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c"} Jan 17 00:42:26.590764 kubelet[2796]: E0117 00:42:26.590566 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:26.591136 kubelet[2796]: E0117 00:42:26.590855 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6889d59764-9h7nx" podUID="d71d28bd-74af-4e97-9fb5-9a08939c13d5" Jan 17 00:42:26.619444 containerd[1588]: time="2026-01-17T00:42:26.619368896Z" level=error msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" failed" error="failed to destroy network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:26.621450 kubelet[2796]: E0117 00:42:26.621374 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:26.631769 kubelet[2796]: E0117 00:42:26.622112 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a"} Jan 17 00:42:26.631769 kubelet[2796]: E0117 00:42:26.622167 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:26.631769 kubelet[2796]: E0117 00:42:26.622271 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mvkvx" podUID="bfbb8f9c-282b-42d0-90d7-a8ecd35e843f" Jan 17 00:42:26.652893 containerd[1588]: time="2026-01-17T00:42:26.652256911Z" level=error msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" failed" error="failed to destroy network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:42:26.665540 kubelet[2796]: E0117 00:42:26.652644 2796 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:26.665540 kubelet[2796]: E0117 00:42:26.652730 2796 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924"} Jan 17 00:42:26.665540 kubelet[2796]: E0117 00:42:26.652789 2796 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"907968b1-857c-479e-a0ab-2b58db52b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:42:26.665540 kubelet[2796]: E0117 00:42:26.652828 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"907968b1-857c-479e-a0ab-2b58db52b182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:26.926992 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:42:26.931258 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:42:27.317465 kubelet[2796]: E0117 00:42:27.315966 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:27.372569 containerd[1588]: time="2026-01-17T00:42:27.372481762Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:42:27.453696 systemd[1]: run-containerd-runc-k8s.io-7624f2e0df80dd58c9938d1a589e4d87fb65854e8eb926957f18f034d40f69da-runc.oPJAYe.mount: Deactivated successfully. Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.687 [INFO][4246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.688 [INFO][4246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" iface="eth0" netns="/var/run/netns/cni-b9cd911d-b5ac-f31a-1b1c-8198b3515455" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.691 [INFO][4246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" iface="eth0" netns="/var/run/netns/cni-b9cd911d-b5ac-f31a-1b1c-8198b3515455" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.693 [INFO][4246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" iface="eth0" netns="/var/run/netns/cni-b9cd911d-b5ac-f31a-1b1c-8198b3515455" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.693 [INFO][4246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:27.693 [INFO][4246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.121 [INFO][4269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.122 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.123 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.150 [WARNING][4269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.150 [INFO][4269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.169 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:28.184528 containerd[1588]: 2026-01-17 00:42:28.176 [INFO][4246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:42:28.185799 containerd[1588]: time="2026-01-17T00:42:28.185327187Z" level=info msg="TearDown network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" successfully" Jan 17 00:42:28.185799 containerd[1588]: time="2026-01-17T00:42:28.185364226Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" returns successfully" Jan 17 00:42:28.192071 systemd[1]: run-netns-cni\x2db9cd911d\x2db5ac\x2df31a\x2d1b1c\x2d8198b3515455.mount: Deactivated successfully. 
Jan 17 00:42:28.336268 kubelet[2796]: I0117 00:42:28.330278 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-backend-key-pair\") pod \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " Jan 17 00:42:28.336268 kubelet[2796]: I0117 00:42:28.332968 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-ca-bundle\") pod \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " Jan 17 00:42:28.336268 kubelet[2796]: I0117 00:42:28.333342 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtqs\" (UniqueName: \"kubernetes.io/projected/d71d28bd-74af-4e97-9fb5-9a08939c13d5-kube-api-access-djtqs\") pod \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\" (UID: \"d71d28bd-74af-4e97-9fb5-9a08939c13d5\") " Jan 17 00:42:28.347358 kubelet[2796]: I0117 00:42:28.345096 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d71d28bd-74af-4e97-9fb5-9a08939c13d5" (UID: "d71d28bd-74af-4e97-9fb5-9a08939c13d5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:42:28.364939 kubelet[2796]: I0117 00:42:28.364879 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71d28bd-74af-4e97-9fb5-9a08939c13d5-kube-api-access-djtqs" (OuterVolumeSpecName: "kube-api-access-djtqs") pod "d71d28bd-74af-4e97-9fb5-9a08939c13d5" (UID: "d71d28bd-74af-4e97-9fb5-9a08939c13d5"). InnerVolumeSpecName "kube-api-access-djtqs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:42:28.365689 kubelet[2796]: I0117 00:42:28.365428 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d71d28bd-74af-4e97-9fb5-9a08939c13d5" (UID: "d71d28bd-74af-4e97-9fb5-9a08939c13d5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:42:28.367825 systemd[1]: var-lib-kubelet-pods-d71d28bd\x2d74af\x2d4e97\x2d9fb5\x2d9a08939c13d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjtqs.mount: Deactivated successfully. Jan 17 00:42:28.368290 systemd[1]: var-lib-kubelet-pods-d71d28bd\x2d74af\x2d4e97\x2d9fb5\x2d9a08939c13d5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
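The mount-unit names systemd deactivates above are escaped paths: "/" becomes "-", and reserved bytes such as a literal "-" or "~" become \xHH, per systemd.unit(5). Decoding one recovers the kubelet volume path that appears in clear text in the "Cleaned up orphaned pod volumes dir" entry a few lines below. A small decoder covering just those two rules:

```go
// Decode a systemd-escaped .mount unit name back into a filesystem path,
// e.g. "...kube\x2dapi\x2daccess\x2ddjtqs.mount" -> ".../kube-api-access-djtqs".
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/') // "-" separates path components
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // "\x2d" is a literal "-", "\x7e" is "~"
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	unit := `var-lib-kubelet-pods-d71d28bd\x2d74af\x2d4e97\x2d9fb5\x2d9a08939c13d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjtqs.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /var/lib/kubelet/pods/d71d28bd-74af-4e97-9fb5-9a08939c13d5/volumes/kubernetes.io~projected/kube-api-access-djtqs
}
```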
Jan 17 00:42:28.434921 kubelet[2796]: I0117 00:42:28.434423 2796 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djtqs\" (UniqueName: \"kubernetes.io/projected/d71d28bd-74af-4e97-9fb5-9a08939c13d5-kube-api-access-djtqs\") on node \"localhost\" DevicePath \"\"" Jan 17 00:42:28.434921 kubelet[2796]: I0117 00:42:28.434502 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 17 00:42:28.434921 kubelet[2796]: I0117 00:42:28.434521 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d71d28bd-74af-4e97-9fb5-9a08939c13d5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 17 00:42:29.270099 kubelet[2796]: I0117 00:42:29.269987 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63b9bfd8-2242-41c2-9f61-17499f636020-whisker-backend-key-pair\") pod \"whisker-7fb8df59d-7qm96\" (UID: \"63b9bfd8-2242-41c2-9f61-17499f636020\") " pod="calico-system/whisker-7fb8df59d-7qm96" Jan 17 00:42:29.270099 kubelet[2796]: I0117 00:42:29.270069 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63b9bfd8-2242-41c2-9f61-17499f636020-whisker-ca-bundle\") pod \"whisker-7fb8df59d-7qm96\" (UID: \"63b9bfd8-2242-41c2-9f61-17499f636020\") " pod="calico-system/whisker-7fb8df59d-7qm96" Jan 17 00:42:29.270099 kubelet[2796]: I0117 00:42:29.270096 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx296\" (UniqueName: \"kubernetes.io/projected/63b9bfd8-2242-41c2-9f61-17499f636020-kube-api-access-vx296\") pod \"whisker-7fb8df59d-7qm96\" (UID: \"63b9bfd8-2242-41c2-9f61-17499f636020\") " pod="calico-system/whisker-7fb8df59d-7qm96" Jan 17 00:42:29.387092 kubelet[2796]: I0117 00:42:29.386780 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71d28bd-74af-4e97-9fb5-9a08939c13d5" path="/var/lib/kubelet/pods/d71d28bd-74af-4e97-9fb5-9a08939c13d5/volumes" Jan 17 00:42:29.451930 containerd[1588]: time="2026-01-17T00:42:29.451868724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb8df59d-7qm96,Uid:63b9bfd8-2242-41c2-9f61-17499f636020,Namespace:calico-system,Attempt:0,}" Jan 17 00:42:29.898274 kernel: bpftool[4443]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:42:30.067550 systemd-networkd[1250]: calid3bb374c2ff: Link UP Jan 17 00:42:30.067996 systemd-networkd[1250]: calid3bb374c2ff: Gained carrier Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.730 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.780 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7fb8df59d--7qm96-eth0 whisker-7fb8df59d- calico-system 63b9bfd8-2242-41c2-9f61-17499f636020 1073 0 2026-01-17 00:42:29 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fb8df59d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost
whisker-7fb8df59d-7qm96 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid3bb374c2ff [] [] }} ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.781 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.876 [INFO][4405] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" HandleID="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Workload="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.880 [INFO][4405] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" HandleID="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Workload="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7fb8df59d-7qm96", "timestamp":"2026-01-17 00:42:29.876454076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.880 [INFO][4405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.880 [INFO][4405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.880 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.907 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.922 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.941 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.948 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.961 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.961 [INFO][4405] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.965 [INFO][4405] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4 Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.975 [INFO][4405] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.998 [INFO][4405] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.998 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" host="localhost" Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.998 [INFO][4405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:42:30.124771 containerd[1588]: 2026-01-17 00:42:29.998 [INFO][4405] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" HandleID="k8s-pod-network.bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Workload="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.013 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fb8df59d--7qm96-eth0", GenerateName:"whisker-7fb8df59d-", Namespace:"calico-system", SelfLink:"", UID:"63b9bfd8-2242-41c2-9f61-17499f636020", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb8df59d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7fb8df59d-7qm96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid3bb374c2ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.014 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.014 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3bb374c2ff ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.073 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.073 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fb8df59d--7qm96-eth0", GenerateName:"whisker-7fb8df59d-", Namespace:"calico-system", SelfLink:"", UID:"63b9bfd8-2242-41c2-9f61-17499f636020", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 42, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb8df59d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4", Pod:"whisker-7fb8df59d-7qm96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid3bb374c2ff", MAC:"52:86:22:5d:70:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:30.133340 containerd[1588]: 2026-01-17 00:42:30.111 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4" Namespace="calico-system" Pod="whisker-7fb8df59d-7qm96" WorkloadEndpoint="localhost-k8s-whisker--7fb8df59d--7qm96-eth0" Jan 17 00:42:30.292308 containerd[1588]: time="2026-01-17T00:42:30.278152868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:30.292308 containerd[1588]: time="2026-01-17T00:42:30.278422401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:30.292308 containerd[1588]: time="2026-01-17T00:42:30.278438712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:30.292308 containerd[1588]: time="2026-01-17T00:42:30.278692715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:30.414495 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:30.606313 containerd[1588]: time="2026-01-17T00:42:30.606084211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb8df59d-7qm96,Uid:63b9bfd8-2242-41c2-9f61-17499f636020,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb105a1905e7929a539f5d2f1a5642b597d7924de93ae47dd10acfe50d85fad4\"" Jan 17 00:42:30.634734 containerd[1588]: time="2026-01-17T00:42:30.631538904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:42:30.743804 containerd[1588]: time="2026-01-17T00:42:30.741509871Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:30.760829 containerd[1588]: time="2026-01-17T00:42:30.759469715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:42:30.784474 containerd[1588]: time="2026-01-17T00:42:30.759565874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:42:30.786285 kubelet[2796]: E0117 00:42:30.785287 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:42:30.786285 kubelet[2796]: E0117 00:42:30.785374 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:42:30.787026 kubelet[2796]: E0117 00:42:30.785609 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6b2884151de4457ac6d07787b37b4d8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:30.789849 containerd[1588]: time="2026-01-17T00:42:30.789812638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:42:30.936934 containerd[1588]: time="2026-01-17T00:42:30.936308050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:30.951278 containerd[1588]: time="2026-01-17T00:42:30.950788614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:42:30.951278 containerd[1588]: time="2026-01-17T00:42:30.950936850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:42:30.955497 kubelet[2796]: E0117 00:42:30.954130 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:42:30.955497 kubelet[2796]: E0117 00:42:30.954311 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:42:30.955890 kubelet[2796]: E0117 00:42:30.954474 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:30.956448 kubelet[2796]: E0117 00:42:30.956378 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:42:31.150106 systemd-networkd[1250]: vxlan.calico: Link UP Jan 17 00:42:31.150117 systemd-networkd[1250]: vxlan.calico: Gained carrier Jan 17 00:42:31.378715 kubelet[2796]: E0117 00:42:31.378661 2796 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:42:31.659854 systemd-networkd[1250]: calid3bb374c2ff: Gained IPv6LL Jan 17 00:42:32.387519 kubelet[2796]: E0117 00:42:32.387119 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:42:33.014914 systemd-networkd[1250]: vxlan.calico: Gained IPv6LL Jan 17 00:42:36.391330 containerd[1588]: time="2026-01-17T00:42:36.388964124Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:42:36.391330 containerd[1588]: time="2026-01-17T00:42:36.390029210Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:42:37.846127 kubelet[2796]: E0117 00:42:37.844309 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.129 [INFO][4593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.129 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" iface="eth0" netns="/var/run/netns/cni-3f9c47a9-bce2-fa0b-c56d-6a38c038104a" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.130 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" iface="eth0" netns="/var/run/netns/cni-3f9c47a9-bce2-fa0b-c56d-6a38c038104a" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4593] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" iface="eth0" netns="/var/run/netns/cni-3f9c47a9-bce2-fa0b-c56d-6a38c038104a" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.295 [INFO][4618] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.298 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.298 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.309 [WARNING][4618] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.309 [INFO][4618] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.320 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:38.331158 containerd[1588]: 2026-01-17 00:42:38.324 [INFO][4593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:42:38.343567 containerd[1588]: time="2026-01-17T00:42:38.343467171Z" level=info msg="TearDown network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" successfully" Jan 17 00:42:38.343567 containerd[1588]: time="2026-01-17T00:42:38.343546038Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" returns successfully" Jan 17 00:42:38.345535 systemd[1]: run-netns-cni\x2d3f9c47a9\x2dbce2\x2dfa0b\x2dc56d\x2d6a38c038104a.mount: Deactivated successfully. 
Jan 17 00:42:38.355060 containerd[1588]: time="2026-01-17T00:42:38.354792961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-bk9rz,Uid:aa612b8d-2f4c-467c-9d4c-78c8e06b8f95,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.121 [INFO][4588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.131 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" iface="eth0" netns="/var/run/netns/cni-90fdf1e4-bbb1-bcb1-affe-5de6ff52c13a" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.132 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" iface="eth0" netns="/var/run/netns/cni-90fdf1e4-bbb1-bcb1-affe-5de6ff52c13a" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" iface="eth0" netns="/var/run/netns/cni-90fdf1e4-bbb1-bcb1-affe-5de6ff52c13a" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.133 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.326 [INFO][4620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.327 [INFO][4620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.328 [INFO][4620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.366 [WARNING][4620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.367 [INFO][4620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.380 [INFO][4620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:38.408945 containerd[1588]: 2026-01-17 00:42:38.390 [INFO][4588] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:42:38.415370 containerd[1588]: time="2026-01-17T00:42:38.410747232Z" level=info msg="TearDown network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" successfully" Jan 17 00:42:38.415370 containerd[1588]: time="2026-01-17T00:42:38.410791525Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" returns successfully" Jan 17 00:42:38.415370 containerd[1588]: time="2026-01-17T00:42:38.412384998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744d6dbcbc-9t986,Uid:5e3f00cb-8452-40aa-ab56-8dc0975dc08a,Namespace:calico-system,Attempt:1,}" Jan 17 00:42:38.418320 systemd[1]: run-netns-cni\x2d90fdf1e4\x2dbbb1\x2dbcb1\x2daffe\x2d5de6ff52c13a.mount: Deactivated successfully. Jan 17 00:42:39.156118 systemd-networkd[1250]: calif8f2012f7d5: Link UP Jan 17 00:42:39.175147 systemd-networkd[1250]: calif8f2012f7d5: Gained carrier Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.613 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0 calico-apiserver-7b478fd4fd- calico-apiserver aa612b8d-2f4c-467c-9d4c-78c8e06b8f95 1118 0 2026-01-17 00:41:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b478fd4fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b478fd4fd-bk9rz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8f2012f7d5 [] [] }} ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.614 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.826 [INFO][4660] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" HandleID="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.827 [INFO][4660] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" HandleID="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b478fd4fd-bk9rz", "timestamp":"2026-01-17 00:42:38.826973004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.827 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.827 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.827 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.888 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:38.989 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.024 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.034 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.080 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.080 [INFO][4660] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.090 [INFO][4660] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466 Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.108 [INFO][4660] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.136 [INFO][4660] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.136 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" host="localhost" Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.136 [INFO][4660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
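For orientation, the assignment above comes out of Calico's per-host affinity block: this node holds 192.168.88.128/26, so it can hand out 64 pod addresses before it has to claim another block, and each workload receives a single /32 from it (here 192.168.88.130/26 within the block). A self-contained sketch of the arithmetic using only the Go standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affinity block this host holds, per the log above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// Each workload endpoint gets a /32 carved from the block, as in
	// "Calico CNI using IPs: [192.168.88.130/32]".
	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}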
Jan 17 00:42:39.225721 containerd[1588]: 2026-01-17 00:42:39.136 [INFO][4660] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" HandleID="k8s-pod-network.38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.147 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b478fd4fd-bk9rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8f2012f7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.147 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.147 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8f2012f7d5 ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.178 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.179 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466", Pod:"calico-apiserver-7b478fd4fd-bk9rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8f2012f7d5", MAC:"56:49:3d:c2:8e:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:39.235360 containerd[1588]: 2026-01-17 00:42:39.208 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-bk9rz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:42:39.387258 containerd[1588]: time="2026-01-17T00:42:39.385969756Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:42:39.387258 containerd[1588]: time="2026-01-17T00:42:39.386867240Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:42:39.406363 containerd[1588]: time="2026-01-17T00:42:39.405508297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:39.406363 containerd[1588]: time="2026-01-17T00:42:39.405989686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:39.406363 containerd[1588]: time="2026-01-17T00:42:39.406104018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:39.450485 containerd[1588]: time="2026-01-17T00:42:39.406305524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:39.460022 systemd-networkd[1250]: cali5862449bb2d: Link UP Jan 17 00:42:39.498829 containerd[1588]: time="2026-01-17T00:42:39.496834505Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:42:39.499328 systemd-networkd[1250]: cali5862449bb2d: Gained carrier Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:38.819 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0 calico-kube-controllers-744d6dbcbc- calico-system 5e3f00cb-8452-40aa-ab56-8dc0975dc08a 1117 0 2026-01-17 00:41:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:744d6dbcbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-744d6dbcbc-9t986 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5862449bb2d [] [] }} ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:38.820 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.032 [INFO][4671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" HandleID="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.033 [INFO][4671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" HandleID="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-744d6dbcbc-9t986", "timestamp":"2026-01-17 00:42:39.032429714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.033 [INFO][4671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.137 [INFO][4671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
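Note how the two concurrent sandbox setups (handler [4660] for the apiserver pod, [4671] for the kube-controllers pod) each log "About to acquire host-wide IPAM lock" and then proceed strictly one after the other: Calico serializes every IPAM read and block write on the host behind a single lock, releasing it only after the claim is persisted. An in-process model of that discipline (the real lock is cross-process, and the types here are hypothetical):

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock"
	next int        // next free offset in the affinity block
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock()
	defer h.mu.Unlock() // released only after the block write completes
	h.next++
	return fmt.Sprintf("192.168.88.%d/26 -> %s", 128+h.next, pod)
}

func main() {
	h := &hostIPAM{next: 1} // .129 is already taken in the log; start past it
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver-7b478fd4fd-bk9rz",
		"calico-kube-controllers-744d6dbcbc-9t986",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(h.assign(p)) }(pod)
	}
	wg.Wait()
}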
Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.137 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.183 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.233 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.271 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.289 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.306 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.308 [INFO][4671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.319 [INFO][4671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.355 [INFO][4671] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.412 [INFO][4671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.427 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" host="localhost" Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.427 [INFO][4671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
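To pull the assigned (IP, ContainerID) pairs out of a journal like this one, a small filter over the "Calico CNI IPAM assigned addresses" lines is enough. A sketch that reads the journal text on stdin; the regex is tuned to the line shape above and is not a hardened parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g.:
	//   ... IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[]
	//   ContainerID="38653473e7e6..." ...
	re := regexp.MustCompile(`IPAM assigned addresses IPv4=\[([0-9./]+)\].*?ContainerID="([0-9a-f]+)"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s  %.12s\n", m[1], m[2]) // IP, short container ID
		}
	}
}

Run against this section it would report 192.168.88.130 for 38653473e7e6 and 192.168.88.131 for 4853530eb957.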
Jan 17 00:42:39.698437 containerd[1588]: 2026-01-17 00:42:39.427 [INFO][4671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" HandleID="k8s-pod-network.4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.433 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0", GenerateName:"calico-kube-controllers-744d6dbcbc-", Namespace:"calico-system", SelfLink:"", UID:"5e3f00cb-8452-40aa-ab56-8dc0975dc08a", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744d6dbcbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-744d6dbcbc-9t986", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5862449bb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.434 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.434 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5862449bb2d ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.486 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.494 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0", GenerateName:"calico-kube-controllers-744d6dbcbc-", Namespace:"calico-system", SelfLink:"", UID:"5e3f00cb-8452-40aa-ab56-8dc0975dc08a", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744d6dbcbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c", Pod:"calico-kube-controllers-744d6dbcbc-9t986", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5862449bb2d", MAC:"8e:5e:8f:2c:6a:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:39.699481 containerd[1588]: 2026-01-17 00:42:39.618 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c" Namespace="calico-system" Pod="calico-kube-controllers-744d6dbcbc-9t986" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:42:40.015797 containerd[1588]: time="2026-01-17T00:42:40.006005880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:40.015797 containerd[1588]: time="2026-01-17T00:42:40.006081761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:40.015797 containerd[1588]: time="2026-01-17T00:42:40.006117778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:40.015797 containerd[1588]: time="2026-01-17T00:42:40.006423238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:40.115828 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:40.397727 kubelet[2796]: E0117 00:42:40.391139 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:40.398574 containerd[1588]: time="2026-01-17T00:42:40.395504585Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.954 [INFO][4743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.981 [INFO][4743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" iface="eth0" netns="/var/run/netns/cni-b39c833c-3039-5613-303c-d9d66545b63e" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.982 [INFO][4743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" iface="eth0" netns="/var/run/netns/cni-b39c833c-3039-5613-303c-d9d66545b63e" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.983 [INFO][4743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" iface="eth0" netns="/var/run/netns/cni-b39c833c-3039-5613-303c-d9d66545b63e" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.983 [INFO][4743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:39.983 [INFO][4743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.287 [INFO][4804] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.288 [INFO][4804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.288 [INFO][4804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.322 [WARNING][4804] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.323 [INFO][4804] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.345 [INFO][4804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:40.419266 containerd[1588]: 2026-01-17 00:42:40.357 [INFO][4743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:42:40.432378 systemd[1]: run-netns-cni\x2db39c833c\x2d3039\x2d5613\x2d303c\x2dd9d66545b63e.mount: Deactivated successfully. Jan 17 00:42:40.454654 containerd[1588]: time="2026-01-17T00:42:40.451410455Z" level=info msg="TearDown network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" successfully" Jan 17 00:42:40.454654 containerd[1588]: time="2026-01-17T00:42:40.451514378Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" returns successfully" Jan 17 00:42:40.476622 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:40.479374 containerd[1588]: time="2026-01-17T00:42:40.476796744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-xbslh,Uid:907968b1-857c-479e-a0ab-2b58db52b182,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:42:40.491907 containerd[1588]: time="2026-01-17T00:42:40.488251597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-bk9rz,Uid:aa612b8d-2f4c-467c-9d4c-78c8e06b8f95,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466\"" Jan 17 00:42:40.523285 containerd[1588]: time="2026-01-17T00:42:40.517247630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.020 [INFO][4728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.020 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" iface="eth0" netns="/var/run/netns/cni-81501b3f-007d-c97f-4238-30bbd51e9951" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.020 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" iface="eth0" netns="/var/run/netns/cni-81501b3f-007d-c97f-4238-30bbd51e9951" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.020 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" iface="eth0" netns="/var/run/netns/cni-81501b3f-007d-c97f-4238-30bbd51e9951" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.021 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.021 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.326 [INFO][4806] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.327 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.345 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.410 [WARNING][4806] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.414 [INFO][4806] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.515 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:40.590726 containerd[1588]: 2026-01-17 00:42:40.545 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:42:40.606688 containerd[1588]: time="2026-01-17T00:42:40.606164249Z" level=info msg="TearDown network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" successfully" Jan 17 00:42:40.609268 containerd[1588]: time="2026-01-17T00:42:40.607549223Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" returns successfully" Jan 17 00:42:40.608982 systemd[1]: run-netns-cni\x2d81501b3f\x2d007d\x2dc97f\x2d4238\x2d30bbd51e9951.mount: Deactivated successfully. Jan 17 00:42:40.621575 containerd[1588]: time="2026-01-17T00:42:40.614495642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2n27,Uid:12e8789b-d87c-447d-950e-1991d31141d1,Namespace:calico-system,Attempt:1,}" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.253 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.254 [INFO][4762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" iface="eth0" netns="/var/run/netns/cni-505ada24-48b5-6e89-2f3d-1907b490f7a4" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.255 [INFO][4762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" iface="eth0" netns="/var/run/netns/cni-505ada24-48b5-6e89-2f3d-1907b490f7a4" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.272 [INFO][4762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" iface="eth0" netns="/var/run/netns/cni-505ada24-48b5-6e89-2f3d-1907b490f7a4" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.279 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.279 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.483 [INFO][4834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.484 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.516 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.574 [WARNING][4834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.574 [INFO][4834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.621 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:40.715798 containerd[1588]: 2026-01-17 00:42:40.685 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:42:40.721306 systemd[1]: run-netns-cni\x2d505ada24\x2d48b5\x2d6e89\x2d2f3d\x2d1907b490f7a4.mount: Deactivated successfully. 
Jan 17 00:42:40.734010 containerd[1588]: time="2026-01-17T00:42:40.715339160Z" level=info msg="TearDown network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" successfully" Jan 17 00:42:40.734010 containerd[1588]: time="2026-01-17T00:42:40.728037262Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" returns successfully" Jan 17 00:42:40.734010 containerd[1588]: time="2026-01-17T00:42:40.729673315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kh68,Uid:1d6f5cd7-ec64-4020-903c-bd9456eec0b4,Namespace:calico-system,Attempt:1,}" Jan 17 00:42:40.747747 containerd[1588]: time="2026-01-17T00:42:40.746653834Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:40.759045 containerd[1588]: time="2026-01-17T00:42:40.754398272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:42:40.759045 containerd[1588]: time="2026-01-17T00:42:40.755919523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:40.780726 kubelet[2796]: E0117 00:42:40.761359 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:40.780726 kubelet[2796]: E0117 00:42:40.761427 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:40.780726 kubelet[2796]: E0117 00:42:40.761805 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nk56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:40.780726 kubelet[2796]: E0117 00:42:40.772421 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:40.829100 containerd[1588]: time="2026-01-17T00:42:40.828633590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744d6dbcbc-9t986,Uid:5e3f00cb-8452-40aa-ab56-8dc0975dc08a,Namespace:calico-system,Attempt:1,} returns sandbox id \"4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c\"" Jan 17 00:42:40.835879 containerd[1588]: time="2026-01-17T00:42:40.835833798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:42:40.889820 systemd-networkd[1250]: calif8f2012f7d5: Gained IPv6LL Jan 17 00:42:41.095982 containerd[1588]: time="2026-01-17T00:42:41.087559838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:41.109768 kubelet[2796]: E0117 00:42:41.109715 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:41.163361 containerd[1588]: time="2026-01-17T00:42:41.163098588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:42:41.181140 containerd[1588]: time="2026-01-17T00:42:41.175972262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:42:41.181974 kubelet[2796]: E0117 00:42:41.181910 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:42:41.182110 kubelet[2796]: E0117 00:42:41.182079 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:42:41.183116 kubelet[2796]: E0117 00:42:41.183050 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjs9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:41.190707 kubelet[2796]: E0117 00:42:41.184550 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:41.324502 systemd-networkd[1250]: cali5862449bb2d: Gained IPv6LL Jan 17 00:42:41.404860 containerd[1588]: time="2026-01-17T00:42:41.403871960Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.948 [INFO][4867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.948 [INFO][4867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" iface="eth0" netns="/var/run/netns/cni-4c07b6bb-02f5-df06-809f-085f6dfb8287" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.950 [INFO][4867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" iface="eth0" netns="/var/run/netns/cni-4c07b6bb-02f5-df06-809f-085f6dfb8287" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.952 [INFO][4867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" iface="eth0" netns="/var/run/netns/cni-4c07b6bb-02f5-df06-809f-085f6dfb8287" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.952 [INFO][4867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:40.953 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.524 [INFO][4921] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.528 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.535 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.618 [WARNING][4921] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.618 [INFO][4921] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.636 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:41.681730 containerd[1588]: 2026-01-17 00:42:41.644 [INFO][4867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:42:41.704390 systemd[1]: run-netns-cni\x2d4c07b6bb\x2d02f5\x2ddf06\x2d809f\x2d085f6dfb8287.mount: Deactivated successfully. 
Jan 17 00:42:41.720089 containerd[1588]: time="2026-01-17T00:42:41.718350008Z" level=info msg="TearDown network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" successfully" Jan 17 00:42:41.720350 containerd[1588]: time="2026-01-17T00:42:41.720152842Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" returns successfully" Jan 17 00:42:41.721911 kubelet[2796]: E0117 00:42:41.721037 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:41.727067 containerd[1588]: time="2026-01-17T00:42:41.726953063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtq9l,Uid:80e5c881-067d-4192-8764-acc37b9b15b6,Namespace:kube-system,Attempt:1,}" Jan 17 00:42:42.145746 kubelet[2796]: E0117 00:42:42.145667 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:42.145746 kubelet[2796]: E0117 00:42:42.144905 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:42.207471 systemd-networkd[1250]: cali01c6e9d7241: Link UP Jan 17 00:42:42.228930 systemd-networkd[1250]: cali01c6e9d7241: Gained carrier Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.325 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0 calico-apiserver-7b478fd4fd- calico-apiserver 907968b1-857c-479e-a0ab-2b58db52b182 1132 0 2026-01-17 00:41:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b478fd4fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b478fd4fd-xbslh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01c6e9d7241 [] [] }} ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.325 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.777 [INFO][4953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" HandleID="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.778 [INFO][4953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" HandleID="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b478fd4fd-xbslh", "timestamp":"2026-01-17 00:42:41.7778465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.778 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.778 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.778 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.846 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.879 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.949 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:41.981 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.024 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.024 [INFO][4953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.035 [INFO][4953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.093 [INFO][4953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 
00:42:42.129 [INFO][4953] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.130 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" host="localhost" Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.130 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:42.439158 containerd[1588]: 2026-01-17 00:42:42.131 [INFO][4953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" HandleID="k8s-pod-network.1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.191 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"907968b1-857c-479e-a0ab-2b58db52b182", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b478fd4fd-xbslh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01c6e9d7241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.191 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.191 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01c6e9d7241 ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.233 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.250 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"907968b1-857c-479e-a0ab-2b58db52b182", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a", Pod:"calico-apiserver-7b478fd4fd-xbslh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01c6e9d7241", MAC:"0e:08:71:29:ef:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:42.447570 containerd[1588]: 2026-01-17 00:42:42.360 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a" Namespace="calico-apiserver" Pod="calico-apiserver-7b478fd4fd-xbslh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:42:42.635957 containerd[1588]: time="2026-01-17T00:42:42.634762281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:42.636159 containerd[1588]: time="2026-01-17T00:42:42.635141067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:42.636159 containerd[1588]: time="2026-01-17T00:42:42.635393499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:42.636159 containerd[1588]: time="2026-01-17T00:42:42.635845181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:42.701768 systemd-networkd[1250]: cali35ac3e1189c: Link UP Jan 17 00:42:42.720854 systemd-networkd[1250]: cali35ac3e1189c: Gained carrier Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.661 [INFO][4943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.670 [INFO][4943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" iface="eth0" netns="/var/run/netns/cni-f02f2e0a-de73-62d9-5665-c7935977dae6" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.674 [INFO][4943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" iface="eth0" netns="/var/run/netns/cni-f02f2e0a-de73-62d9-5665-c7935977dae6" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.674 [INFO][4943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" iface="eth0" netns="/var/run/netns/cni-f02f2e0a-de73-62d9-5665-c7935977dae6" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.676 [INFO][4943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.676 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.813 [INFO][4978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:41.814 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:42.632 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:42.744 [WARNING][4978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:42.744 [INFO][4978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:42.780 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:42:42.844664 containerd[1588]: 2026-01-17 00:42:42.838 [INFO][4943] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:42:42.872711 containerd[1588]: time="2026-01-17T00:42:42.872647021Z" level=info msg="TearDown network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" successfully" Jan 17 00:42:42.872901 containerd[1588]: time="2026-01-17T00:42:42.872879987Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" returns successfully" Jan 17 00:42:42.873720 kubelet[2796]: E0117 00:42:42.873567 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:42.880929 containerd[1588]: time="2026-01-17T00:42:42.880550205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mvkvx,Uid:bfbb8f9c-282b-42d0-90d7-a8ecd35e843f,Namespace:kube-system,Attempt:1,}" Jan 17 00:42:42.891694 systemd[1]: run-netns-cni\x2df02f2e0a\x2dde73\x2d62d9\x2d5665\x2dc7935977dae6.mount: Deactivated successfully. Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:41.287 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--g2n27-eth0 goldmane-666569f655- calico-system 12e8789b-d87c-447d-950e-1991d31141d1 1133 0 2026-01-17 00:41:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-g2n27 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali35ac3e1189c [] [] }} ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:41.288 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:41.791 [INFO][4954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" HandleID="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:41.792 [INFO][4954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" HandleID="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-g2n27", "timestamp":"2026-01-17 00:42:41.791694255 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:42.998323 
containerd[1588]: 2026-01-17 00:42:41.793 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.131 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.133 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.238 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.327 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.434 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.449 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.460 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.460 [INFO][4954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.491 [INFO][4954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148 Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.556 [INFO][4954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.617 [INFO][4954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.617 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" host="localhost" Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.618 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
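The repeated kubelet warning "Nameserver limits exceeded" above comes from kubelet capping a pod's resolv.conf at three nameservers (the glibc MAXNS limit): extra entries inherited from the host's /etc/resolv.conf are dropped and only the first three are applied, exactly the "1.1.1.1 1.0.0.1 8.8.8.8" line it reports. A minimal sketch of that clamping, assuming a simplified resolv.conf parser rather than kubelet's actual one:

```go
// Sketch of the three-nameserver clamp behind the
// "Nameserver limits exceeded" warnings above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

func clampNameservers(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		applied := ns[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
		return applied
	}
	return ns
}

func main() {
	// Four upstream resolvers on the host; only the first three apply.
	clampNameservers("nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n")
}
```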
Jan 17 00:42:42.998323 containerd[1588]: 2026-01-17 00:42:42.618 [INFO][4954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" HandleID="k8s-pod-network.c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.653 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"12e8789b-d87c-447d-950e-1991d31141d1", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-g2n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali35ac3e1189c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.653 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.653 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35ac3e1189c ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.795 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.826 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"12e8789b-d87c-447d-950e-1991d31141d1", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148", Pod:"goldmane-666569f655-g2n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali35ac3e1189c", MAC:"92:97:44:96:58:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:43.005858 containerd[1588]: 2026-01-17 00:42:42.923 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148" Namespace="calico-system" Pod="goldmane-666569f655-g2n27" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:42:43.091422 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:43.720914 systemd-networkd[1250]: cali01c6e9d7241: Gained IPv6LL Jan 17 00:42:43.974880 containerd[1588]: time="2026-01-17T00:42:43.974371001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b478fd4fd-xbslh,Uid:907968b1-857c-479e-a0ab-2b58db52b182,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a\"" Jan 17 00:42:43.989420 containerd[1588]: time="2026-01-17T00:42:43.989152788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:42:44.100957 containerd[1588]: time="2026-01-17T00:42:44.091165007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:44.100957 containerd[1588]: time="2026-01-17T00:42:44.091308515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:44.100957 containerd[1588]: time="2026-01-17T00:42:44.091327630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.100957 containerd[1588]: time="2026-01-17T00:42:44.091547691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.250887 systemd-networkd[1250]: cali11986ffb675: Link UP Jan 17 00:42:44.272036 systemd-networkd[1250]: cali11986ffb675: Gained carrier Jan 17 00:42:44.341965 containerd[1588]: time="2026-01-17T00:42:44.338525112Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:41.584 [INFO][4909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7kh68-eth0 csi-node-driver- calico-system 1d6f5cd7-ec64-4020-903c-bd9456eec0b4 1134 0 2026-01-17 00:41:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7kh68 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali11986ffb675 [] [] }} ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:41.588 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:41.837 [INFO][4971] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" HandleID="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:41.839 [INFO][4971] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" HandleID="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c73a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7kh68", "timestamp":"2026-01-17 00:42:41.8377914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:41.839 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:42.804 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:42.808 [INFO][4971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:42.912 [INFO][4971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.086 [INFO][4971] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.609 [INFO][4971] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.750 [INFO][4971] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.874 [INFO][4971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.874 [INFO][4971] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.879 [INFO][4971] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312 Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:43.937 [INFO][4971] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:44.009 [INFO][4971] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:44.009 [INFO][4971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" host="localhost" Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:44.009 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
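Every assignment traced above follows the same shape: acquire the host-wide IPAM lock, look up the host's block affinity (192.168.88.128/26 here), load the block, claim the lowest free address, write the block back, release the lock. A compressed in-process sketch of that loop follows; a mutex and a bitmap stand in for Calico's host-wide lock and its /26 allocation block, whereas the real implementation persists the block with a compare-and-swap against the datastore.

```go
// Sketch of block-based allocation as traced in the log: lock,
// confirm affinity, scan the /26 block for a free slot, persist,
// unlock.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	mu    sync.Mutex // stands in for the host-wide IPAM lock
	cidr  net.IPNet  // e.g. 192.168.88.128/26
	inUse [64]bool   // one slot per address in the /26
}

// assign claims the lowest free address, mirroring the
// "Attempting to assign 1 addresses from block" step above.
func (b *block) assign() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	base := b.cidr.IP.To4()
	for i := range b.inUse {
		if !b.inUse[i] {
			b.inUse[i] = true
			return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: *cidr}
	// .128-.131 were claimed earlier in the log; with those marked used,
	// the next three assignments land on .132, .133 and .134 as above.
	for i := 0; i < 4; i++ {
		b.inUse[i] = true
	}
	for i := 0; i < 3; i++ {
		ip, _ := b.assign()
		fmt.Println("assigned", ip)
	}
}
```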
Jan 17 00:42:44.349495 containerd[1588]: 2026-01-17 00:42:44.009 [INFO][4971] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" HandleID="k8s-pod-network.f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.085 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d6f5cd7-ec64-4020-903c-bd9456eec0b4", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7kh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali11986ffb675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.101 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.102 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11986ffb675 ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.268 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.270 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d6f5cd7-ec64-4020-903c-bd9456eec0b4", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312", Pod:"csi-node-driver-7kh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali11986ffb675", MAC:"8a:25:8f:20:49:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:44.353305 containerd[1588]: 2026-01-17 00:42:44.322 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312" Namespace="calico-system" Pod="csi-node-driver-7kh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:42:44.385918 containerd[1588]: time="2026-01-17T00:42:44.377458899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:42:44.385918 containerd[1588]: time="2026-01-17T00:42:44.377704036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:44.386104 kubelet[2796]: E0117 00:42:44.379834 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:44.386104 kubelet[2796]: E0117 00:42:44.379893 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:44.386104 kubelet[2796]: E0117 00:42:44.380040 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:44.390925 kubelet[2796]: E0117 00:42:44.390795 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:44.424171 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:44.457031 systemd-networkd[1250]: cali3084967ef48: Link UP Jan 17 00:42:44.462121 systemd-networkd[1250]: cali3084967ef48: Gained carrier Jan 17 00:42:44.532554 systemd-networkd[1250]: cali35ac3e1189c: Gained IPv6LL Jan 17 00:42:44.586566 containerd[1588]: time="2026-01-17T00:42:44.585999231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:44.586566 containerd[1588]: time="2026-01-17T00:42:44.586136548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:44.586566 containerd[1588]: time="2026-01-17T00:42:44.586158639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.586566 containerd[1588]: time="2026-01-17T00:42:44.586411061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:42.294 [INFO][4990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0 coredns-668d6bf9bc- kube-system 80e5c881-067d-4192-8764-acc37b9b15b6 1141 0 2026-01-17 00:41:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mtq9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3084967ef48 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:42.308 [INFO][4990] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:42.539 [INFO][5012] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" HandleID="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:42.544 [INFO][5012] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" HandleID="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6f80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mtq9l", "timestamp":"2026-01-17 00:42:42.5396971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:42.546 [INFO][5012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.010 [INFO][5012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
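Each "Link UP" / "Gained carrier" pair from systemd-networkd above is it noticing the host side of a veth pair the CNI plugin created: the cali* end stays on the host, the peer is moved into the pod's netns as eth0, and both ends are brought up. A sketch of that plumbing using github.com/vishvananda/netlink (an assumed dependency for this illustration, though it is the usual choice in this space); it needs root and an existing netns path to actually run:

```go
// Sketch of the veth plumbing behind the "cali...: Link UP /
// Gained carrier" messages above.
package main

import (
	"log"
	"os"

	"github.com/vishvananda/netlink"
)

func plumbVeth(hostName, peerName, netnsPath string) error {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: hostName}, // e.g. cali3084967ef48
		PeerName:  peerName,                          // later renamed eth0 inside the pod
	}
	if err := netlink.LinkAdd(veth); err != nil {
		return err
	}
	ns, err := os.Open(netnsPath) // e.g. /var/run/netns/cni-...
	if err != nil {
		return err
	}
	defer ns.Close()
	peer, err := netlink.LinkByName(peerName)
	if err != nil {
		return err
	}
	// Push the peer into the pod's network namespace.
	if err := netlink.LinkSetNsFd(peer, int(ns.Fd())); err != nil {
		return err
	}
	// Bringing the host end up is what systemd-networkd then observes
	// as "Link UP" followed by "Gained carrier".
	return netlink.LinkSetUp(veth)
}

func main() {
	if err := plumbVeth("cali3084967ef48", "tmp-eth0", "/var/run/netns/example"); err != nil {
		log.Fatal(err)
	}
}
```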
Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.010 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.086 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.143 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.211 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.269 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.291 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.291 [INFO][5012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.324 [INFO][5012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.366 [INFO][5012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.417 [INFO][5012] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.417 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" host="localhost" Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.417 [INFO][5012] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:42:44.588019 containerd[1588]: 2026-01-17 00:42:44.417 [INFO][5012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" HandleID="k8s-pod-network.d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.589241 containerd[1588]: 2026-01-17 00:42:44.428 [INFO][4990] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80e5c881-067d-4192-8764-acc37b9b15b6", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mtq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3084967ef48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:44.589241 containerd[1588]: 2026-01-17 00:42:44.429 [INFO][4990] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.589241 containerd[1588]: 2026-01-17 00:42:44.429 [INFO][4990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3084967ef48 ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.589241 containerd[1588]: 2026-01-17 00:42:44.487 [INFO][4990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.589241 
containerd[1588]: 2026-01-17 00:42:44.496 [INFO][4990] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80e5c881-067d-4192-8764-acc37b9b15b6", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d", Pod:"coredns-668d6bf9bc-mtq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3084967ef48", MAC:"ca:5f:43:1f:9a:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:44.589241 containerd[1588]: 2026-01-17 00:42:44.536 [INFO][4990] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtq9l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:42:44.668450 kubelet[2796]: E0117 00:42:44.667922 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:44.751115 containerd[1588]: time="2026-01-17T00:42:44.751021871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2n27,Uid:12e8789b-d87c-447d-950e-1991d31141d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148\"" Jan 17 00:42:44.757539 systemd[1]: 
run-containerd-runc-k8s.io-f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312-runc.UhUeR2.mount: Deactivated successfully. Jan 17 00:42:44.769339 containerd[1588]: time="2026-01-17T00:42:44.759719348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:42:44.849118 containerd[1588]: time="2026-01-17T00:42:44.849069328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:44.858533 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:44.865981 containerd[1588]: time="2026-01-17T00:42:44.854486433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:42:44.865981 containerd[1588]: time="2026-01-17T00:42:44.854680114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:44.866088 kubelet[2796]: E0117 00:42:44.857066 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:42:44.866088 kubelet[2796]: E0117 00:42:44.857131 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:42:44.866088 kubelet[2796]: E0117 00:42:44.858255 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qc8z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:44.866088 kubelet[2796]: E0117 00:42:44.865272 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:44.917273 containerd[1588]: 
time="2026-01-17T00:42:44.914310285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:44.917273 containerd[1588]: time="2026-01-17T00:42:44.914543571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:44.917273 containerd[1588]: time="2026-01-17T00:42:44.914567675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.918689 containerd[1588]: time="2026-01-17T00:42:44.917645628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:44.920378 systemd-networkd[1250]: caliafa8f209887: Link UP Jan 17 00:42:44.937934 systemd-networkd[1250]: caliafa8f209887: Gained carrier Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.219 [INFO][5070] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0 coredns-668d6bf9bc- kube-system bfbb8f9c-282b-42d0-90d7-a8ecd35e843f 1150 0 2026-01-17 00:41:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mvkvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliafa8f209887 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.220 [INFO][5070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.456 [INFO][5126] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" HandleID="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.462 [INFO][5126] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" HandleID="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mvkvx", "timestamp":"2026-01-17 00:42:44.456727927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.463 [INFO][5126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.463 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.464 [INFO][5126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.530 [INFO][5126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.627 [INFO][5126] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.753 [INFO][5126] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.766 [INFO][5126] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.772 [INFO][5126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.773 [INFO][5126] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.794 [INFO][5126] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6 Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.823 [INFO][5126] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.871 [INFO][5126] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.871 [INFO][5126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" host="localhost" Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.871 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
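[Annotation — not part of the captured log] The ipam records above trace Calico's block-affinity assignment end to end: the node confirms its affinity for the 192.168.88.128/26 block, acquires the host-wide IPAM lock, claims one free address (192.168.88.136 here) under a per-pod handle, writes the block back to the datastore, and releases the lock. The Go sketch below is a minimal in-memory model of that acquire/claim/release sequence only; blockAllocator, nextIP, and the handle strings are illustrative assumptions, not Calico's actual ipam types.

package main

import (
	"errors"
	"fmt"
	"net"
	"sync"
)

// blockAllocator models one affine IPAM block such as 192.168.88.128/26.
// The mutex stands in for the datastore-backed host-wide lock the log
// shows being acquired and released around each assignment.
type blockAllocator struct {
	mu       sync.Mutex
	cidr     *net.IPNet
	assigned map[string]string // IP -> handle that claimed it
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	_, block, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &blockAllocator{cidr: block, assigned: map[string]string{}}, nil
}

// AutoAssign mirrors the logged sequence: acquire the lock, scan the
// affine block for a free address, record the claim ("Writing block in
// order to claim IPs"), then release the lock.
func (b *blockAllocator) AutoAssign(handle string) (net.IP, error) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, taken := b.assigned[ip.String()]; !taken {
			b.assigned[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, errors.New("block exhausted")
}

// nextIP returns ip+1; real Calico also skips reserved addresses, which
// this sketch ignores.
func nextIP(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	alloc, err := newBlockAllocator("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// Two CNI ADDs claiming addresses, like the coredns pods above.
	for _, handle := range []string{"k8s-pod-network.d5d3b078…", "k8s-pod-network.ef5dc005…"} {
		ip, _ := alloc.AutoAssign(handle)
		fmt.Printf("assigned %s/26 handle=%q\n", ip, handle)
	}
}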
Jan 17 00:42:45.006811 containerd[1588]: 2026-01-17 00:42:44.871 [INFO][5126] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" HandleID="k8s-pod-network.ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.010432 containerd[1588]: 2026-01-17 00:42:44.889 [INFO][5070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mvkvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafa8f209887", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:45.010432 containerd[1588]: 2026-01-17 00:42:44.889 [INFO][5070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.010432 containerd[1588]: 2026-01-17 00:42:44.889 [INFO][5070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafa8f209887 ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.010432 containerd[1588]: 2026-01-17 00:42:44.934 [INFO][5070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.010432 
containerd[1588]: 2026-01-17 00:42:44.936 [INFO][5070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6", Pod:"coredns-668d6bf9bc-mvkvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafa8f209887", MAC:"3a:58:92:26:a9:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:42:45.010432 containerd[1588]: 2026-01-17 00:42:44.991 [INFO][5070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6" Namespace="kube-system" Pod="coredns-668d6bf9bc-mvkvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:42:45.058739 containerd[1588]: time="2026-01-17T00:42:45.055807600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kh68,Uid:1d6f5cd7-ec64-4020-903c-bd9456eec0b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312\"" Jan 17 00:42:45.063981 containerd[1588]: time="2026-01-17T00:42:45.062806618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:42:45.111826 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:45.195144 containerd[1588]: time="2026-01-17T00:42:45.193517760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:42:45.195144 containerd[1588]: time="2026-01-17T00:42:45.193644347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:42:45.195144 containerd[1588]: time="2026-01-17T00:42:45.193673951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:45.195144 containerd[1588]: time="2026-01-17T00:42:45.193832547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:42:45.218128 containerd[1588]: time="2026-01-17T00:42:45.218074375Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:45.228254 containerd[1588]: time="2026-01-17T00:42:45.228015982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:42:45.231065 containerd[1588]: time="2026-01-17T00:42:45.230789596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:42:45.237698 kubelet[2796]: E0117 00:42:45.231448 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:42:45.238163 kubelet[2796]: E0117 00:42:45.238125 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:42:45.238654 kubelet[2796]: E0117 00:42:45.238488 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:45.250441 containerd[1588]: time="2026-01-17T00:42:45.249508340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:42:45.373894 containerd[1588]: time="2026-01-17T00:42:45.373772414Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:45.380489 containerd[1588]: time="2026-01-17T00:42:45.374636807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtq9l,Uid:80e5c881-067d-4192-8764-acc37b9b15b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d\"" Jan 17 00:42:45.390075 kubelet[2796]: E0117 00:42:45.386799 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:45.398283 containerd[1588]: time="2026-01-17T00:42:45.393485932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:42:45.398283 containerd[1588]: time="2026-01-17T00:42:45.393554441Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:42:45.398953 kubelet[2796]: E0117 00:42:45.393778 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:42:45.398953 kubelet[2796]: E0117 00:42:45.393818 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:42:45.398953 kubelet[2796]: E0117 00:42:45.393932 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:45.401010 kubelet[2796]: E0117 00:42:45.399926 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:45.400627 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:42:45.408020 containerd[1588]: time="2026-01-17T00:42:45.407978972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:42:45.410799 containerd[1588]: time="2026-01-17T00:42:45.410668770Z" level=info msg="CreateContainer within sandbox \"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:42:45.519372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170033947.mount: Deactivated successfully. Jan 17 00:42:45.549892 containerd[1588]: time="2026-01-17T00:42:45.549702039Z" level=info msg="CreateContainer within sandbox \"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cce8185e8dc657d4f87cf744a7003909ed4abc5e9cd1dbbcdb838d7ff928d98b\"" Jan 17 00:42:45.552413 containerd[1588]: time="2026-01-17T00:42:45.552383011Z" level=info msg="StartContainer for \"cce8185e8dc657d4f87cf744a7003909ed4abc5e9cd1dbbcdb838d7ff928d98b\"" Jan 17 00:42:45.557434 containerd[1588]: time="2026-01-17T00:42:45.552936523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:45.560126 containerd[1588]: time="2026-01-17T00:42:45.560078628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:42:45.560440 containerd[1588]: time="2026-01-17T00:42:45.560403755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:42:45.560703 kubelet[2796]: E0117 00:42:45.560655 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:42:45.561657 kubelet[2796]: E0117 00:42:45.560966 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:42:45.564026 kubelet[2796]: E0117 00:42:45.563880 2796 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6b2884151de4457ac6d07787b37b4d8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:45.564711 containerd[1588]: time="2026-01-17T00:42:45.564678991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mvkvx,Uid:bfbb8f9c-282b-42d0-90d7-a8ecd35e843f,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6\"" Jan 17 00:42:45.567978 kubelet[2796]: E0117 00:42:45.567541 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:45.568962 containerd[1588]: time="2026-01-17T00:42:45.568937725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:42:45.582885 containerd[1588]: time="2026-01-17T00:42:45.582696555Z" level=info msg="CreateContainer within sandbox \"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:42:45.612951 systemd-networkd[1250]: cali11986ffb675: Gained IPv6LL Jan 17 00:42:45.679060 containerd[1588]: time="2026-01-17T00:42:45.674624356Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:45.690964 kubelet[2796]: E0117 00:42:45.690537 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:45.720271 containerd[1588]: time="2026-01-17T00:42:45.702293843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:42:45.720271 containerd[1588]: time="2026-01-17T00:42:45.704723216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:42:45.720271 containerd[1588]: time="2026-01-17T00:42:45.715275623Z" level=info msg="CreateContainer within sandbox \"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b633e84db57a8b299cfbad54ff3cb398f5b2d75b79f08f0e91b05ef07d31595a\"" Jan 17 00:42:45.720271 containerd[1588]: time="2026-01-17T00:42:45.720112276Z" level=info msg="StartContainer for \"b633e84db57a8b299cfbad54ff3cb398f5b2d75b79f08f0e91b05ef07d31595a\"" Jan 17 00:42:45.732549 kubelet[2796]: E0117 00:42:45.706986 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:42:45.732549 kubelet[2796]: E0117 00:42:45.707028 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:42:45.732549 kubelet[2796]: E0117 00:42:45.707152 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:45.732549 kubelet[2796]: E0117 00:42:45.708722 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:42:45.732549 kubelet[2796]: E0117 00:42:45.719349 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:42:45.751680 kubelet[2796]: E0117 00:42:45.738406 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:45.995719 systemd-networkd[1250]: cali3084967ef48: Gained IPv6LL Jan 17 00:42:46.125559 containerd[1588]: time="2026-01-17T00:42:46.124034743Z" level=info msg="StartContainer for \"cce8185e8dc657d4f87cf744a7003909ed4abc5e9cd1dbbcdb838d7ff928d98b\" returns successfully" Jan 17 00:42:46.125559 containerd[1588]: time="2026-01-17T00:42:46.124070229Z" level=info msg="StartContainer for \"b633e84db57a8b299cfbad54ff3cb398f5b2d75b79f08f0e91b05ef07d31595a\" returns successfully" Jan 17 00:42:46.748246 kubelet[2796]: E0117 00:42:46.746941 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:46.785858 kubelet[2796]: E0117 00:42:46.777515 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:46.791420 systemd-networkd[1250]: caliafa8f209887: Gained IPv6LL Jan 17 00:42:46.811662 kubelet[2796]: E0117 00:42:46.811553 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:46.826299 kubelet[2796]: E0117 00:42:46.822543 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:42:46.869462 kubelet[2796]: I0117 
00:42:46.867323 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mtq9l" podStartSLOduration=101.867286908 podStartE2EDuration="1m41.867286908s" podCreationTimestamp="2026-01-17 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:46.843712336 +0000 UTC m=+104.242557411" watchObservedRunningTime="2026-01-17 00:42:46.867286908 +0000 UTC m=+104.266131993" Jan 17 00:42:47.027044 kubelet[2796]: I0117 00:42:47.025341 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mvkvx" podStartSLOduration=102.025312875 podStartE2EDuration="1m42.025312875s" podCreationTimestamp="2026-01-17 00:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:42:47.021673974 +0000 UTC m=+104.420519059" watchObservedRunningTime="2026-01-17 00:42:47.025312875 +0000 UTC m=+104.424157960" Jan 17 00:42:47.797829 kubelet[2796]: E0117 00:42:47.794918 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:47.817345 kubelet[2796]: E0117 00:42:47.816422 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:48.810097 kubelet[2796]: E0117 00:42:48.803472 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:48.810097 kubelet[2796]: E0117 00:42:48.804553 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:49.858392 kubelet[2796]: E0117 00:42:49.854407 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:53.442912 containerd[1588]: time="2026-01-17T00:42:53.440085588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:42:53.529708 containerd[1588]: time="2026-01-17T00:42:53.529142379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:53.545286 containerd[1588]: time="2026-01-17T00:42:53.543866039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:42:53.545286 containerd[1588]: time="2026-01-17T00:42:53.544000891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:42:53.545514 kubelet[2796]: E0117 00:42:53.544351 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:42:53.545514 kubelet[2796]: E0117 00:42:53.544420 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:42:53.545514 kubelet[2796]: E0117 00:42:53.544671 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjs9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 
00:42:53.546498 kubelet[2796]: E0117 00:42:53.546426 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:42:54.515967 containerd[1588]: time="2026-01-17T00:42:54.515088088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:42:54.629373 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:55466.service - OpenSSH per-connection server daemon (10.0.0.1:55466). Jan 17 00:42:55.096028 containerd[1588]: time="2026-01-17T00:42:55.094654941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:55.132794 containerd[1588]: time="2026-01-17T00:42:55.129449559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:42:55.132794 containerd[1588]: time="2026-01-17T00:42:55.129507005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:55.146840 kubelet[2796]: E0117 00:42:55.133442 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:55.146840 kubelet[2796]: E0117 00:42:55.133541 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:55.146840 kubelet[2796]: E0117 00:42:55.133949 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nk56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:55.186908 kubelet[2796]: E0117 00:42:55.181660 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:42:55.502170 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 55466 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:42:55.542817 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:55.712667 systemd-logind[1555]: New session 10 of user core. Jan 17 00:42:55.745084 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:42:56.433932 sshd[5407]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:56.451035 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:55466.service: Deactivated successfully. Jan 17 00:42:56.463893 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. 
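[Annotation — not part of the captured log] Every image pull in this section fails the same way: containerd asks ghcr.io for a flatcar/calico image at tag v3.30.4, the registry answers 404 ("trying next host - response was http.StatusNotFound"), containerd surfaces a gRPC NotFound, kubelet records ErrImagePull, and subsequent pod syncs report ImagePullBackOff while the retry delay grows. The Go sketch below models that retry gating only; pullImage and the 10s-initial/300s-cap schedule are assumptions for illustration, not kubelet's actual implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("rpc error: code = NotFound")

// pullImage stands in for the CRI PullImage call. Here it always fails,
// as every ghcr.io/flatcar/calico pull does in the log above.
func pullImage(ref string) error {
	return fmt.Errorf("failed to resolve reference %q: %w", ref, errNotFound)
}

func main() {
	const ref = "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	backoff := 10 * time.Second          // assumed initial delay
	const maxBackoff = 300 * time.Second // assumed cap

	for attempt := 1; attempt <= 5; attempt++ {
		err := pullImage(ref)
		if err == nil {
			fmt.Println("pulled", ref)
			return
		}
		// The failed attempt surfaces as ErrImagePull; while the back-off
		// timer runs, pod syncs report ImagePullBackOff instead.
		fmt.Printf("attempt %d: ErrImagePull: %v\n", attempt, err)
		fmt.Printf("attempt %d: ImagePullBackOff: back-off %s pulling image %q\n", attempt, backoff, ref)
		time.Sleep(backoff / 1000) // scaled down so the sketch finishes quickly
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}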
Jan 17 00:42:56.466487 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:42:56.474554 systemd-logind[1555]: Removed session 10. Jan 17 00:42:57.404378 containerd[1588]: time="2026-01-17T00:42:57.393533155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:42:57.427389 kubelet[2796]: E0117 00:42:57.426064 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:42:57.563711 containerd[1588]: time="2026-01-17T00:42:57.562767362Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:57.573492 containerd[1588]: time="2026-01-17T00:42:57.573430848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:42:57.574144 containerd[1588]: time="2026-01-17T00:42:57.573910242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:57.575761 kubelet[2796]: E0117 00:42:57.575690 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:42:57.576954 kubelet[2796]: E0117 00:42:57.576682 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:42:57.576954 kubelet[2796]: E0117 00:42:57.576864 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qc8z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:57.608911 kubelet[2796]: E0117 00:42:57.595328 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:42:58.186906 kubelet[2796]: E0117 
00:42:58.182998 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:59.389465 containerd[1588]: time="2026-01-17T00:42:59.388972032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:42:59.515475 containerd[1588]: time="2026-01-17T00:42:59.512869044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:42:59.520893 containerd[1588]: time="2026-01-17T00:42:59.520674086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:42:59.520893 containerd[1588]: time="2026-01-17T00:42:59.520816142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:42:59.521471 kubelet[2796]: E0117 00:42:59.521366 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:59.522927 kubelet[2796]: E0117 00:42:59.522282 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:42:59.528553 kubelet[2796]: E0117 00:42:59.528328 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:42:59.531125 kubelet[2796]: E0117 00:42:59.529958 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:43:01.398154 containerd[1588]: time="2026-01-17T00:43:01.393505889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:43:01.505702 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:55476.service - OpenSSH per-connection server daemon (10.0.0.1:55476). 
Jan 17 00:43:01.526450 containerd[1588]: time="2026-01-17T00:43:01.526340559Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:01.532333 containerd[1588]: time="2026-01-17T00:43:01.532043622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:43:01.532333 containerd[1588]: time="2026-01-17T00:43:01.532257942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:43:01.534344 kubelet[2796]: E0117 00:43:01.533954 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:43:01.540554 kubelet[2796]: E0117 00:43:01.534370 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:43:01.540554 kubelet[2796]: E0117 00:43:01.534535 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:01.546846 containerd[1588]: time="2026-01-17T00:43:01.538155044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:43:01.695821 containerd[1588]: time="2026-01-17T00:43:01.695170310Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:01.701467 containerd[1588]: time="2026-01-17T00:43:01.701089303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:43:01.701467 containerd[1588]: time="2026-01-17T00:43:01.701268718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:43:01.705680 kubelet[2796]: E0117 00:43:01.705475 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:43:01.705680 kubelet[2796]: E0117 00:43:01.705649 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:43:01.706148 kubelet[2796]: E0117 00:43:01.705806 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:01.711751 kubelet[2796]: E0117 00:43:01.708703 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:43:01.822999 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 55476 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:01.840682 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:01.879806 systemd-logind[1555]: New session 11 of user core. Jan 17 00:43:01.898084 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 17 00:43:02.488541 sshd[5450]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:02.500068 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:55476.service: Deactivated successfully. Jan 17 00:43:02.525719 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:43:02.527912 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:43:02.537137 systemd-logind[1555]: Removed session 11. Jan 17 00:43:05.424484 kubelet[2796]: E0117 00:43:05.420485 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:43:06.924742 containerd[1588]: time="2026-01-17T00:43:06.924238528Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.127 [WARNING][5479] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466", Pod:"calico-apiserver-7b478fd4fd-bk9rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8f2012f7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.128 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.129 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" iface="eth0" netns="" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.129 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.129 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.268 [INFO][5488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.269 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.269 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.286 [WARNING][5488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.287 [INFO][5488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.292 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:07.307003 containerd[1588]: 2026-01-17 00:43:07.299 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.307003 containerd[1588]: time="2026-01-17T00:43:07.306725434Z" level=info msg="TearDown network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" successfully" Jan 17 00:43:07.307003 containerd[1588]: time="2026-01-17T00:43:07.306760660Z" level=info msg="StopPodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" returns successfully" Jan 17 00:43:07.310666 containerd[1588]: time="2026-01-17T00:43:07.309403548Z" level=info msg="RemovePodSandbox for \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:43:07.326758 containerd[1588]: time="2026-01-17T00:43:07.324242788Z" level=info msg="Forcibly stopping sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\"" Jan 17 00:43:07.516165 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782). 
Jan 17 00:43:07.623528 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:07.627842 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:07.658877 systemd-logind[1555]: New session 12 of user core. Jan 17 00:43:07.672142 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.512 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa612b8d-2f4c-467c-9d4c-78c8e06b8f95", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38653473e7e640302d631362c5f5ff2cdbbe8c64548280024a59100bd35b7466", Pod:"calico-apiserver-7b478fd4fd-bk9rz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8f2012f7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.515 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.515 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" iface="eth0" netns="" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.515 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.515 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.657 [INFO][5515] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.658 [INFO][5515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.658 [INFO][5515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.695 [WARNING][5515] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.695 [INFO][5515] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" HandleID="k8s-pod-network.6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--bk9rz-eth0" Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.707 [INFO][5515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:07.722691 containerd[1588]: 2026-01-17 00:43:07.715 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838" Jan 17 00:43:07.723735 containerd[1588]: time="2026-01-17T00:43:07.723700494Z" level=info msg="TearDown network for sandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" successfully" Jan 17 00:43:07.752843 containerd[1588]: time="2026-01-17T00:43:07.752783209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:07.753158 containerd[1588]: time="2026-01-17T00:43:07.753132640Z" level=info msg="RemovePodSandbox \"6043efd644b0293f7333d267e329128e3d4baedfc947521fa745a89e4c295838\" returns successfully" Jan 17 00:43:07.784442 containerd[1588]: time="2026-01-17T00:43:07.784358653Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.885 [WARNING][5545] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80e5c881-067d-4192-8764-acc37b9b15b6", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d", Pod:"coredns-668d6bf9bc-mtq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3084967ef48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.886 [INFO][5545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.886 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" iface="eth0" netns="" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.886 [INFO][5545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.886 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.934 [INFO][5554] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.934 [INFO][5554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.934 [INFO][5554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.955 [WARNING][5554] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.955 [INFO][5554] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.964 [INFO][5554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:07.979130 containerd[1588]: 2026-01-17 00:43:07.968 [INFO][5545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:07.983243 containerd[1588]: time="2026-01-17T00:43:07.980999402Z" level=info msg="TearDown network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" successfully" Jan 17 00:43:07.983243 containerd[1588]: time="2026-01-17T00:43:07.981047522Z" level=info msg="StopPodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" returns successfully" Jan 17 00:43:07.983243 containerd[1588]: time="2026-01-17T00:43:07.981920150Z" level=info msg="RemovePodSandbox for \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:43:07.983243 containerd[1588]: time="2026-01-17T00:43:07.981959874Z" level=info msg="Forcibly stopping sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\"" Jan 17 00:43:08.077360 sshd[5513]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:08.094742 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:32782.service: Deactivated successfully. Jan 17 00:43:08.111158 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:43:08.114167 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:43:08.119305 systemd-logind[1555]: Removed session 12. Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.171 [WARNING][5572] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80e5c881-067d-4192-8764-acc37b9b15b6", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5d3b078b0669bf5a059b88f99a1619167fd635b18444bde033a61b6f6c4b63d", Pod:"coredns-668d6bf9bc-mtq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3084967ef48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.171 [INFO][5572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.171 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" iface="eth0" netns="" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.171 [INFO][5572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.171 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.255 [INFO][5584] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.255 [INFO][5584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.256 [INFO][5584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.276 [WARNING][5584] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.277 [INFO][5584] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" HandleID="k8s-pod-network.146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Workload="localhost-k8s-coredns--668d6bf9bc--mtq9l-eth0" Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.286 [INFO][5584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:08.310322 containerd[1588]: 2026-01-17 00:43:08.300 [INFO][5572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f" Jan 17 00:43:08.310322 containerd[1588]: time="2026-01-17T00:43:08.309650204Z" level=info msg="TearDown network for sandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" successfully" Jan 17 00:43:08.326568 containerd[1588]: time="2026-01-17T00:43:08.326333501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:08.326568 containerd[1588]: time="2026-01-17T00:43:08.326430643Z" level=info msg="RemovePodSandbox \"146953b4d0f4a2978116edb7cc0c797320cefea62c5639e04435cd1979337b3f\" returns successfully" Jan 17 00:43:08.327424 containerd[1588]: time="2026-01-17T00:43:08.327317524Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.492 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"907968b1-857c-479e-a0ab-2b58db52b182", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a", Pod:"calico-apiserver-7b478fd4fd-xbslh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01c6e9d7241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.494 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.494 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" iface="eth0" netns="" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.495 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.495 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.568 [INFO][5609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.568 [INFO][5609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.568 [INFO][5609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.583 [WARNING][5609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.584 [INFO][5609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.592 [INFO][5609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:08.608998 containerd[1588]: 2026-01-17 00:43:08.599 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.608998 containerd[1588]: time="2026-01-17T00:43:08.608942791Z" level=info msg="TearDown network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" successfully" Jan 17 00:43:08.608998 containerd[1588]: time="2026-01-17T00:43:08.608982095Z" level=info msg="StopPodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" returns successfully" Jan 17 00:43:08.611050 containerd[1588]: time="2026-01-17T00:43:08.610754972Z" level=info msg="RemovePodSandbox for \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:43:08.611050 containerd[1588]: time="2026-01-17T00:43:08.610827418Z" level=info msg="Forcibly stopping sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\"" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.770 [WARNING][5626] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0", GenerateName:"calico-apiserver-7b478fd4fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"907968b1-857c-479e-a0ab-2b58db52b182", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b478fd4fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa0ae7bf6270951f3841501c99392c24af67fe76116f7e3bab03a1b26b3824a", Pod:"calico-apiserver-7b478fd4fd-xbslh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01c6e9d7241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.771 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.771 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" iface="eth0" netns="" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.771 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.771 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.934 [INFO][5634] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.938 [INFO][5634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.938 [INFO][5634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.965 [WARNING][5634] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.965 [INFO][5634] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" HandleID="k8s-pod-network.18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Workload="localhost-k8s-calico--apiserver--7b478fd4fd--xbslh-eth0" Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.979 [INFO][5634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:08.994422 containerd[1588]: 2026-01-17 00:43:08.985 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924" Jan 17 00:43:08.994422 containerd[1588]: time="2026-01-17T00:43:08.994347871Z" level=info msg="TearDown network for sandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" successfully" Jan 17 00:43:09.008851 containerd[1588]: time="2026-01-17T00:43:09.008693843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:09.009021 containerd[1588]: time="2026-01-17T00:43:09.008883237Z" level=info msg="RemovePodSandbox \"18bd103772d11138bca38d6845863ba4d5470d98a817866db21139b99b233924\" returns successfully" Jan 17 00:43:09.009815 containerd[1588]: time="2026-01-17T00:43:09.009773869Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.297 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"12e8789b-d87c-447d-950e-1991d31141d1", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148", Pod:"goldmane-666569f655-g2n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali35ac3e1189c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.300 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.300 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" iface="eth0" netns="" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.300 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.300 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.412 [INFO][5660] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.412 [INFO][5660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.413 [INFO][5660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.430 [WARNING][5660] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.430 [INFO][5660] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.440 [INFO][5660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:09.470664 containerd[1588]: 2026-01-17 00:43:09.453 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.470664 containerd[1588]: time="2026-01-17T00:43:09.461876760Z" level=info msg="TearDown network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" successfully" Jan 17 00:43:09.470664 containerd[1588]: time="2026-01-17T00:43:09.461909140Z" level=info msg="StopPodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" returns successfully" Jan 17 00:43:09.470664 containerd[1588]: time="2026-01-17T00:43:09.462539747Z" level=info msg="RemovePodSandbox for \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:43:09.470664 containerd[1588]: time="2026-01-17T00:43:09.462579682Z" level=info msg="Forcibly stopping sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\"" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.628 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"12e8789b-d87c-447d-950e-1991d31141d1", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c564602e0335c96b0f85f2166077ab66393adf05ce823af04ed8d3f099778148", Pod:"goldmane-666569f655-g2n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali35ac3e1189c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.632 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.632 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" iface="eth0" netns="" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.632 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.632 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.726 [INFO][5684] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.726 [INFO][5684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.726 [INFO][5684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.743 [WARNING][5684] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.743 [INFO][5684] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" HandleID="k8s-pod-network.e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Workload="localhost-k8s-goldmane--666569f655--g2n27-eth0" Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.751 [INFO][5684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:09.779389 containerd[1588]: 2026-01-17 00:43:09.765 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580" Jan 17 00:43:09.779997 containerd[1588]: time="2026-01-17T00:43:09.779377100Z" level=info msg="TearDown network for sandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" successfully" Jan 17 00:43:09.795986 containerd[1588]: time="2026-01-17T00:43:09.792682014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:09.795986 containerd[1588]: time="2026-01-17T00:43:09.792768906Z" level=info msg="RemovePodSandbox \"e743773d5257b6cdf8c7595a12a1bd9cf37af0b0a68f9c28a93b951984acd580\" returns successfully" Jan 17 00:43:09.795986 containerd[1588]: time="2026-01-17T00:43:09.793392209Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:09.989 [WARNING][5702] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0", GenerateName:"calico-kube-controllers-744d6dbcbc-", Namespace:"calico-system", SelfLink:"", UID:"5e3f00cb-8452-40aa-ab56-8dc0975dc08a", ResourceVersion:"1391", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744d6dbcbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c", Pod:"calico-kube-controllers-744d6dbcbc-9t986", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5862449bb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:09.990 [INFO][5702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:09.990 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" iface="eth0" netns="" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:09.990 [INFO][5702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:09.990 [INFO][5702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.137 [INFO][5710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.137 [INFO][5710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.137 [INFO][5710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.168 [WARNING][5710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.168 [INFO][5710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.186 [INFO][5710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:10.202319 containerd[1588]: 2026-01-17 00:43:10.195 [INFO][5702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.203874 containerd[1588]: time="2026-01-17T00:43:10.203830727Z" level=info msg="TearDown network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" successfully" Jan 17 00:43:10.203968 containerd[1588]: time="2026-01-17T00:43:10.203945412Z" level=info msg="StopPodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" returns successfully" Jan 17 00:43:10.205499 containerd[1588]: time="2026-01-17T00:43:10.205296903Z" level=info msg="RemovePodSandbox for \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:43:10.205499 containerd[1588]: time="2026-01-17T00:43:10.205336397Z" level=info msg="Forcibly stopping sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\"" Jan 17 00:43:10.388276 kubelet[2796]: E0117 00:43:10.385345 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:43:10.393001 containerd[1588]: time="2026-01-17T00:43:10.389120001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.351 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0", GenerateName:"calico-kube-controllers-744d6dbcbc-", Namespace:"calico-system", SelfLink:"", UID:"5e3f00cb-8452-40aa-ab56-8dc0975dc08a", ResourceVersion:"1391", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744d6dbcbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4853530eb95702154d657b0b5ccd60bbacc2744b5da86ebc2689875912c7315c", Pod:"calico-kube-controllers-744d6dbcbc-9t986", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5862449bb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.352 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.352 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" iface="eth0" netns="" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.352 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.352 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.472 [INFO][5734] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.473 [INFO][5734] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.474 [INFO][5734] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.526 [WARNING][5734] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.526 [INFO][5734] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" HandleID="k8s-pod-network.2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Workload="localhost-k8s-calico--kube--controllers--744d6dbcbc--9t986-eth0" Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.533 [INFO][5734] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:10.554592 containerd[1588]: 2026-01-17 00:43:10.539 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340" Jan 17 00:43:10.554592 containerd[1588]: time="2026-01-17T00:43:10.547983631Z" level=info msg="TearDown network for sandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" successfully" Jan 17 00:43:10.554592 containerd[1588]: time="2026-01-17T00:43:10.548688318Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:10.561777 containerd[1588]: time="2026-01-17T00:43:10.561352634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:43:10.561777 containerd[1588]: time="2026-01-17T00:43:10.561691988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:43:10.564132 kubelet[2796]: E0117 00:43:10.563295 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:43:10.564132 kubelet[2796]: E0117 00:43:10.563364 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:43:10.564132 kubelet[2796]: E0117 00:43:10.563512 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6b2884151de4457ac6d07787b37b4d8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:10.572144 containerd[1588]: time="2026-01-17T00:43:10.571533223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:43:10.575311 containerd[1588]: time="2026-01-17T00:43:10.571933246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:43:10.575311 containerd[1588]: time="2026-01-17T00:43:10.574140878Z" level=info msg="RemovePodSandbox \"2fd61b158b1a4f4b552d255de0ff192cc99bcd2530e55a10389ef477a3a22340\" returns successfully" Jan 17 00:43:10.577162 containerd[1588]: time="2026-01-17T00:43:10.576866700Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:43:10.680165 containerd[1588]: time="2026-01-17T00:43:10.679909552Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:10.683780 containerd[1588]: time="2026-01-17T00:43:10.683121164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:43:10.683780 containerd[1588]: time="2026-01-17T00:43:10.683329433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:43:10.687550 kubelet[2796]: E0117 00:43:10.684329 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:43:10.687550 kubelet[2796]: E0117 00:43:10.684445 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:43:10.687550 kubelet[2796]: E0117 00:43:10.687416 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:10.690976 kubelet[2796]: E0117 00:43:10.688934 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.820 [WARNING][5751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d6f5cd7-ec64-4020-903c-bd9456eec0b4", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312", Pod:"csi-node-driver-7kh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali11986ffb675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.820 [INFO][5751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.820 [INFO][5751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" iface="eth0" netns="" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.820 [INFO][5751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.820 [INFO][5751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.909 [INFO][5759] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.909 [INFO][5759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.910 [INFO][5759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.934 [WARNING][5759] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.934 [INFO][5759] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.952 [INFO][5759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:10.975813 containerd[1588]: 2026-01-17 00:43:10.963 [INFO][5751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:10.975813 containerd[1588]: time="2026-01-17T00:43:10.972892581Z" level=info msg="TearDown network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" successfully" Jan 17 00:43:10.975813 containerd[1588]: time="2026-01-17T00:43:10.972932034Z" level=info msg="StopPodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" returns successfully" Jan 17 00:43:10.975813 containerd[1588]: time="2026-01-17T00:43:10.974414691Z" level=info msg="RemovePodSandbox for \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:43:10.975813 containerd[1588]: time="2026-01-17T00:43:10.974448493Z" level=info msg="Forcibly stopping sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\"" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.207 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d6f5cd7-ec64-4020-903c-bd9456eec0b4", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3624913c7f8a75588277ffeba90fc84d28706d58edd5b0c5a72bc66090c1312", Pod:"csi-node-driver-7kh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali11986ffb675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.208 [INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.208 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" iface="eth0" netns="" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.208 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.208 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.293 [INFO][5785] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.293 [INFO][5785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.293 [INFO][5785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.309 [WARNING][5785] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.309 [INFO][5785] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" HandleID="k8s-pod-network.beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Workload="localhost-k8s-csi--node--driver--7kh68-eth0" Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.326 [INFO][5785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:11.357724 containerd[1588]: 2026-01-17 00:43:11.340 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31" Jan 17 00:43:11.357724 containerd[1588]: time="2026-01-17T00:43:11.354840671Z" level=info msg="TearDown network for sandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" successfully" Jan 17 00:43:11.390063 containerd[1588]: time="2026-01-17T00:43:11.389806110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:11.390390 containerd[1588]: time="2026-01-17T00:43:11.390089649Z" level=info msg="RemovePodSandbox \"beacbbd592765f72cb18eb0a2f7acc5facff59d1520ac5375f18eda30ba50d31\" returns successfully" Jan 17 00:43:11.391712 containerd[1588]: time="2026-01-17T00:43:11.391652282Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:43:11.401295 kubelet[2796]: E0117 00:43:11.398391 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.578 [WARNING][5802] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" WorkloadEndpoint="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.579 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.579 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" iface="eth0" netns="" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.579 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.579 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.697 [INFO][5810] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.698 [INFO][5810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.700 [INFO][5810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.726 [WARNING][5810] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.726 [INFO][5810] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.730 [INFO][5810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:11.751526 containerd[1588]: 2026-01-17 00:43:11.737 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:11.751526 containerd[1588]: time="2026-01-17T00:43:11.751098225Z" level=info msg="TearDown network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" successfully" Jan 17 00:43:11.751526 containerd[1588]: time="2026-01-17T00:43:11.751131426Z" level=info msg="StopPodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" returns successfully" Jan 17 00:43:11.755728 containerd[1588]: time="2026-01-17T00:43:11.754404563Z" level=info msg="RemovePodSandbox for \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:43:11.755728 containerd[1588]: time="2026-01-17T00:43:11.754485415Z" level=info msg="Forcibly stopping sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\"" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:11.988 [WARNING][5828] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" WorkloadEndpoint="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:11.990 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:11.990 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" iface="eth0" netns="" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:11.990 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:11.990 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.101 [INFO][5837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.101 [INFO][5837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.101 [INFO][5837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.172 [WARNING][5837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.175 [INFO][5837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" HandleID="k8s-pod-network.2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Workload="localhost-k8s-whisker--6889d59764--9h7nx-eth0" Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.192 [INFO][5837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:12.216708 containerd[1588]: 2026-01-17 00:43:12.201 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c" Jan 17 00:43:12.216708 containerd[1588]: time="2026-01-17T00:43:12.216001672Z" level=info msg="TearDown network for sandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" successfully" Jan 17 00:43:12.237563 containerd[1588]: time="2026-01-17T00:43:12.236469466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:12.237563 containerd[1588]: time="2026-01-17T00:43:12.236584612Z" level=info msg="RemovePodSandbox \"2db2c1146d56ab29af2d145825df9fc1ec4d391bd1a3aa14eaf923742682865c\" returns successfully" Jan 17 00:43:12.239090 containerd[1588]: time="2026-01-17T00:43:12.238975610Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:43:12.391486 kubelet[2796]: E0117 00:43:12.391371 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:43:12.391821 kubelet[2796]: E0117 00:43:12.391562 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" 
podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.423 [WARNING][5861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6", Pod:"coredns-668d6bf9bc-mvkvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafa8f209887", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.423 [INFO][5861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.423 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" iface="eth0" netns="" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.423 [INFO][5861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.423 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.573 [INFO][5869] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.573 [INFO][5869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.574 [INFO][5869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.609 [WARNING][5869] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.609 [INFO][5869] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.630 [INFO][5869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:12.651855 containerd[1588]: 2026-01-17 00:43:12.642 [INFO][5861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:12.660975 containerd[1588]: time="2026-01-17T00:43:12.653762268Z" level=info msg="TearDown network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" successfully" Jan 17 00:43:12.660975 containerd[1588]: time="2026-01-17T00:43:12.653798095Z" level=info msg="StopPodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" returns successfully" Jan 17 00:43:12.672913 containerd[1588]: time="2026-01-17T00:43:12.666423795Z" level=info msg="RemovePodSandbox for \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:43:12.672913 containerd[1588]: time="2026-01-17T00:43:12.666483727Z" level=info msg="Forcibly stopping sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\"" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.861 [WARNING][5885] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfbb8f9c-282b-42d0-90d7-a8ecd35e843f", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef5dc0057c7bc57e5f6d6b1c462e00a17c1346b530a2fed519004ce777460cf6", Pod:"coredns-668d6bf9bc-mvkvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafa8f209887", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.862 [INFO][5885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.863 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" iface="eth0" netns="" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.863 [INFO][5885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.863 [INFO][5885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.966 [INFO][5893] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.972 [INFO][5893] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.973 [INFO][5893] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.987 [WARNING][5893] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.987 [INFO][5893] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" HandleID="k8s-pod-network.9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Workload="localhost-k8s-coredns--668d6bf9bc--mvkvx-eth0" Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.992 [INFO][5893] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:43:13.005378 containerd[1588]: 2026-01-17 00:43:12.999 [INFO][5885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a" Jan 17 00:43:13.005378 containerd[1588]: time="2026-01-17T00:43:13.005088753Z" level=info msg="TearDown network for sandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" successfully" Jan 17 00:43:13.045368 containerd[1588]: time="2026-01-17T00:43:13.043732846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:43:13.045368 containerd[1588]: time="2026-01-17T00:43:13.043819637Z" level=info msg="RemovePodSandbox \"9ac22dd00d1972f47b48807072fdc0b1e4be220db85b2ad66c2bc60e410fdb6a\" returns successfully" Jan 17 00:43:13.101856 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:52294.service - OpenSSH per-connection server daemon (10.0.0.1:52294). Jan 17 00:43:13.293960 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 52294 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:13.306806 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:13.335068 systemd-logind[1555]: New session 13 of user core. Jan 17 00:43:13.349269 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:43:13.748888 sshd[5900]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:13.763543 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:52294.service: Deactivated successfully. Jan 17 00:43:13.786368 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:43:13.786540 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:43:13.803095 systemd-logind[1555]: Removed session 13. Jan 17 00:43:17.384969 kubelet[2796]: E0117 00:43:17.384760 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:18.778318 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:52300.service - OpenSSH per-connection server daemon (10.0.0.1:52300). 
Jan 17 00:43:18.874253 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 52300 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:18.878859 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:18.896692 systemd-logind[1555]: New session 14 of user core. Jan 17 00:43:18.904822 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:43:19.157491 sshd[5916]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:19.178969 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314). Jan 17 00:43:19.182813 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:52300.service: Deactivated successfully. Jan 17 00:43:19.188900 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:43:19.195905 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:43:19.207805 systemd-logind[1555]: Removed session 14. Jan 17 00:43:19.250965 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:19.254093 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:19.285848 systemd-logind[1555]: New session 15 of user core. Jan 17 00:43:19.307011 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:43:19.402999 containerd[1588]: time="2026-01-17T00:43:19.399251407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:43:19.503967 containerd[1588]: time="2026-01-17T00:43:19.503053037Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:19.517004 containerd[1588]: time="2026-01-17T00:43:19.514083117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:43:19.517004 containerd[1588]: time="2026-01-17T00:43:19.514309900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:43:19.517266 kubelet[2796]: E0117 00:43:19.514549 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:43:19.517266 kubelet[2796]: E0117 00:43:19.514681 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:43:19.517266 kubelet[2796]: E0117 00:43:19.514873 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjs9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:19.519519 kubelet[2796]: E0117 00:43:19.519331 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:43:19.813286 sshd[5929]: pam_unix(sshd:session): session closed for user core Jan 17 
00:43:19.820475 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:52314.service: Deactivated successfully. Jan 17 00:43:19.842898 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:43:19.845091 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:43:19.867712 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316). Jan 17 00:43:19.873657 systemd-logind[1555]: Removed session 15. Jan 17 00:43:20.001260 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:20.010297 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:20.034823 systemd-logind[1555]: New session 16 of user core. Jan 17 00:43:20.043423 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:43:20.268276 sshd[5945]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:20.279120 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:52316.service: Deactivated successfully. Jan 17 00:43:20.292432 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:43:20.295897 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:43:20.301546 systemd-logind[1555]: Removed session 16. Jan 17 00:43:21.398383 containerd[1588]: time="2026-01-17T00:43:21.397138883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:43:21.492118 containerd[1588]: time="2026-01-17T00:43:21.490549745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:21.497864 containerd[1588]: time="2026-01-17T00:43:21.497301811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:43:21.497864 containerd[1588]: time="2026-01-17T00:43:21.497434860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:43:21.498097 kubelet[2796]: E0117 00:43:21.497706 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:43:21.498097 kubelet[2796]: E0117 00:43:21.497771 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:43:21.498097 kubelet[2796]: E0117 00:43:21.497938 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nk56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:21.504890 kubelet[2796]: E0117 00:43:21.504793 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:43:22.382676 kubelet[2796]: E0117 00:43:22.382486 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:23.392876 containerd[1588]: time="2026-01-17T00:43:23.391125706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:43:23.469151 containerd[1588]: time="2026-01-17T00:43:23.469001965Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:23.473403 containerd[1588]: time="2026-01-17T00:43:23.473251428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:43:23.473403 containerd[1588]: time="2026-01-17T00:43:23.473370720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:43:23.474521 kubelet[2796]: E0117 00:43:23.474452 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:43:23.475427 kubelet[2796]: E0117 00:43:23.474526 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:43:23.475427 kubelet[2796]: E0117 00:43:23.474708 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:23.479304 containerd[1588]: time="2026-01-17T00:43:23.479141248Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:43:23.571072 containerd[1588]: time="2026-01-17T00:43:23.570133351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:23.577027 containerd[1588]: time="2026-01-17T00:43:23.576880961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:43:23.577304 containerd[1588]: time="2026-01-17T00:43:23.577046189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:43:23.579719 kubelet[2796]: E0117 00:43:23.578334 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:43:23.579719 kubelet[2796]: E0117 00:43:23.578441 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:43:23.579719 kubelet[2796]: E0117 00:43:23.579041 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:23.580814 kubelet[2796]: E0117 00:43:23.580478 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:43:25.323817 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:34426.service - OpenSSH per-connection server daemon (10.0.0.1:34426). 
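
Every Calico image pull in this stretch fails identically: containerd reaches ghcr.io, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces it as ErrImagePull. A sketch of how one might confirm from the node that the v3.30.4 tag simply is not published, assuming shell access; crane comes from go-containerregistry and is not part of the node image:

    # Re-run one failing pull by hand in the containerd namespace kubelet uses
    ctr --namespace k8s.io images pull ghcr.io/flatcar/calico/csi:v3.30.4
    # List the tags the registry actually serves for that repository
    crane ls ghcr.io/flatcar/calico/csi
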
Jan 17 00:43:25.423082 kubelet[2796]: E0117 00:43:25.419016 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:43:25.572594 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 34426 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:25.577309 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:25.609062 systemd-logind[1555]: New session 17 of user core. Jan 17 00:43:25.631138 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:43:26.104860 sshd[5962]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:26.131480 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:34426.service: Deactivated successfully. Jan 17 00:43:26.179359 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:43:26.186877 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:43:26.200535 systemd-logind[1555]: Removed session 17. Jan 17 00:43:47.703120 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:34436.service - OpenSSH per-connection server daemon (10.0.0.1:34436). Jan 17 00:43:48.943788 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 34436 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:48.968865 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:48.993375 containerd[1588]: time="2026-01-17T00:43:48.993291163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:43:49.006135 kubelet[2796]: E0117 00:43:49.003590 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:49.036619 systemd-logind[1555]: New session 18 of user core. Jan 17 00:43:49.051774 systemd[1]: Started session-18.scope - Session 18 of User core. 
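
The recurring dns.go:153 warning is kubelet trimming the node resolver list: glibc honors at most three nameserver entries, so kubelet applies the first three and reports the rest as omitted. A resolv.conf like the sketch below would yield exactly the applied line logged here; the fourth entry is hypothetical, included only to show what gets dropped:

    # /etc/resolv.conf (illustrative; the fourth nameserver is a hypothetical extra entry)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
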
Jan 17 00:43:49.067565 kubelet[2796]: E0117 00:43:49.042350 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:43:49.067565 kubelet[2796]: E0117 00:43:49.042696 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:43:49.067565 kubelet[2796]: E0117 00:43:49.043719 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:43:49.067565 kubelet[2796]: E0117 00:43:49.044144 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:43:49.296377 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909-rootfs.mount: Deactivated successfully. Jan 17 00:43:49.341246 containerd[1588]: time="2026-01-17T00:43:49.303251715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:49.341246 containerd[1588]: time="2026-01-17T00:43:49.316335658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:43:49.341246 containerd[1588]: time="2026-01-17T00:43:49.316513630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:43:49.349362 kubelet[2796]: E0117 00:43:49.349079 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:43:49.351238 kubelet[2796]: E0117 00:43:49.349454 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:43:49.364161 containerd[1588]: time="2026-01-17T00:43:49.364064484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:43:49.434800 kubelet[2796]: E0117 00:43:49.434622 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qc8z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:49.438308 kubelet[2796]: E0117 00:43:49.437460 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:43:49.514829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618-rootfs.mount: Deactivated successfully. 
Jan 17 00:43:49.535246 containerd[1588]: time="2026-01-17T00:43:49.534728515Z" level=info msg="shim disconnected" id=8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909 namespace=k8s.io Jan 17 00:43:49.535246 containerd[1588]: time="2026-01-17T00:43:49.535001455Z" level=warning msg="cleaning up after shim disconnected" id=8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909 namespace=k8s.io Jan 17 00:43:49.535246 containerd[1588]: time="2026-01-17T00:43:49.535013668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:49.556434 containerd[1588]: time="2026-01-17T00:43:49.554814514Z" level=info msg="shim disconnected" id=1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618 namespace=k8s.io Jan 17 00:43:49.556434 containerd[1588]: time="2026-01-17T00:43:49.554880607Z" level=warning msg="cleaning up after shim disconnected" id=1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618 namespace=k8s.io Jan 17 00:43:49.556434 containerd[1588]: time="2026-01-17T00:43:49.554891868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:49.751423 containerd[1588]: time="2026-01-17T00:43:49.744922047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:43:49.754150 sshd[5978]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:49.770555 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:34436.service: Deactivated successfully. Jan 17 00:43:49.809582 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:43:49.816595 containerd[1588]: time="2026-01-17T00:43:49.814465167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:43:49.817250 containerd[1588]: time="2026-01-17T00:43:49.817156039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:43:49.822632 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:43:49.825562 kubelet[2796]: E0117 00:43:49.825324 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:43:49.826383 kubelet[2796]: E0117 00:43:49.825811 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:43:49.830976 systemd-logind[1555]: Removed session 18. 
Jan 17 00:43:49.848323 kubelet[2796]: E0117 00:43:49.848246 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:43:49.853369 kubelet[2796]: E0117 00:43:49.853329 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:43:50.076350 kubelet[2796]: I0117 00:43:50.071137 2796 scope.go:117] "RemoveContainer" containerID="8ab9af56f0b5c782f7053b1d5c5bb6ec22bc27a284538357bf31f86b9b995909" Jan 17 00:43:50.076350 kubelet[2796]: E0117 00:43:50.071789 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:50.089808 kubelet[2796]: I0117 00:43:50.088384 2796 
scope.go:117] "RemoveContainer" containerID="1b9d799b37edf874f6a0651b5585f29adb085d2a7d8d8401f1d2120691466618" Jan 17 00:43:50.089808 kubelet[2796]: E0117 00:43:50.088507 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:50.102954 containerd[1588]: time="2026-01-17T00:43:50.102899975Z" level=info msg="CreateContainer within sandbox \"612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:43:50.116042 containerd[1588]: time="2026-01-17T00:43:50.115802830Z" level=info msg="CreateContainer within sandbox \"9dd5d5f6b1483f8973a58bf33d35c7020bfad8fe3201771c4c70bcef90a84a2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:43:50.244966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958527872.mount: Deactivated successfully. Jan 17 00:43:50.280302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265118251.mount: Deactivated successfully. Jan 17 00:43:50.351861 containerd[1588]: time="2026-01-17T00:43:50.351349588Z" level=info msg="CreateContainer within sandbox \"9dd5d5f6b1483f8973a58bf33d35c7020bfad8fe3201771c4c70bcef90a84a2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"00b7fba06744c90f83bc8332103b092b264a3d05e5c56bafe5831942eadf11e5\"" Jan 17 00:43:50.358456 containerd[1588]: time="2026-01-17T00:43:50.358408144Z" level=info msg="StartContainer for \"00b7fba06744c90f83bc8332103b092b264a3d05e5c56bafe5831942eadf11e5\"" Jan 17 00:43:50.428772 containerd[1588]: time="2026-01-17T00:43:50.426095757Z" level=info msg="CreateContainer within sandbox \"612972b69e307d88da58f27362d0257922bfb36ab50b00036484e8f2dda7ee19\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7c0834a908682e438d106d7084ef94a241782a4fd0516eba29730f46ab9a6de5\"" Jan 17 00:43:50.428772 containerd[1588]: time="2026-01-17T00:43:50.427751587Z" level=info msg="StartContainer for \"7c0834a908682e438d106d7084ef94a241782a4fd0516eba29730f46ab9a6de5\"" Jan 17 00:43:50.789774 containerd[1588]: time="2026-01-17T00:43:50.789555521Z" level=info msg="StartContainer for \"00b7fba06744c90f83bc8332103b092b264a3d05e5c56bafe5831942eadf11e5\" returns successfully" Jan 17 00:43:50.897316 containerd[1588]: time="2026-01-17T00:43:50.896501769Z" level=info msg="StartContainer for \"7c0834a908682e438d106d7084ef94a241782a4fd0516eba29730f46ab9a6de5\" returns successfully" Jan 17 00:43:51.120300 kubelet[2796]: E0117 00:43:51.118318 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:51.152441 kubelet[2796]: E0117 00:43:51.152295 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:52.155424 kubelet[2796]: E0117 00:43:52.155331 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:52.160461 kubelet[2796]: E0117 00:43:52.156162 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
17 00:43:53.176096 kubelet[2796]: E0117 00:43:53.174578 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:54.788063 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:50492.service - OpenSSH per-connection server daemon (10.0.0.1:50492). Jan 17 00:43:54.943600 sshd[6145]: Accepted publickey for core from 10.0.0.1 port 50492 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:43:54.975892 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:55.008130 systemd-logind[1555]: New session 19 of user core. Jan 17 00:43:55.022588 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:43:55.644023 sshd[6145]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:55.654964 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:50492.service: Deactivated successfully. Jan 17 00:43:55.666997 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:43:55.670869 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:43:55.683049 systemd-logind[1555]: Removed session 19. Jan 17 00:43:56.591853 kubelet[2796]: E0117 00:43:56.590713 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:57.403526 systemd[1]: run-containerd-runc-k8s.io-7624f2e0df80dd58c9938d1a589e4d87fb65854e8eb926957f18f034d40f69da-runc.hVYOkF.mount: Deactivated successfully. Jan 17 00:43:59.403913 kubelet[2796]: E0117 00:43:59.397314 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:44:00.679779 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:50496.service - OpenSSH per-connection server daemon (10.0.0.1:50496). Jan 17 00:44:00.945013 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 50496 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:00.948160 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:00.982877 systemd-logind[1555]: New session 20 of user core. Jan 17 00:44:00.995633 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:44:01.365642 sshd[6199]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:01.372532 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:50496.service: Deactivated successfully. 
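
Around 00:43:49-50 above, the kube-controller-manager and kube-scheduler containers die (shim disconnected), kubelet removes them, and Attempt:1 replacements start successfully. A minimal sketch of inspecting such a restart from the node with crictl, assuming shell access; <container-id> is a placeholder for the exited container's ID:

    # Show all attempts of the restarted container, including the exited one
    crictl ps -a --name kube-scheduler
    # Read the exited attempt's logs to see why the shim went away
    crictl logs <container-id>
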
Jan 17 00:44:01.383813 kubelet[2796]: E0117 00:44:01.383430 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:01.384986 kubelet[2796]: E0117 00:44:01.384938 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:44:01.387154 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:44:01.393366 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:44:01.395713 systemd-logind[1555]: Removed session 20. Jan 17 00:44:01.772941 kubelet[2796]: E0117 00:44:01.772739 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:02.415288 containerd[1588]: time="2026-01-17T00:44:02.415023369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:44:02.534366 containerd[1588]: time="2026-01-17T00:44:02.534071171Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:02.539577 containerd[1588]: time="2026-01-17T00:44:02.538962367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:44:02.539577 containerd[1588]: time="2026-01-17T00:44:02.539119390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:44:02.539815 kubelet[2796]: E0117 00:44:02.539509 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:02.539815 kubelet[2796]: E0117 00:44:02.539733 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:44:02.541176 kubelet[2796]: E0117 00:44:02.539927 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6b2884151de4457ac6d07787b37b4d8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:02.554845 containerd[1588]: time="2026-01-17T00:44:02.553276538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:44:02.688030 containerd[1588]: time="2026-01-17T00:44:02.687751775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:02.701110 containerd[1588]: time="2026-01-17T00:44:02.700299150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:44:02.701110 containerd[1588]: time="2026-01-17T00:44:02.700434593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:02.706758 kubelet[2796]: E0117 00:44:02.705981 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:02.706758 kubelet[2796]: E0117 00:44:02.706137 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:44:02.706758 kubelet[2796]: E0117 00:44:02.706456 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb8df59d-7qm96_calico-system(63b9bfd8-2242-41c2-9f61-17499f636020): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:02.710536 kubelet[2796]: E0117 00:44:02.710052 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:44:03.397772 containerd[1588]: time="2026-01-17T00:44:03.397725312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:44:03.516520 containerd[1588]: time="2026-01-17T00:44:03.516373419Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:03.523380 containerd[1588]: time="2026-01-17T00:44:03.523063320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:44:03.523380 containerd[1588]: time="2026-01-17T00:44:03.523309118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:44:03.528962 kubelet[2796]: E0117 00:44:03.528022 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:03.528962 kubelet[2796]: E0117 00:44:03.528103 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:44:03.528962 kubelet[2796]: E0117 00:44:03.528356 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjs9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-744d6dbcbc-9t986_calico-system(5e3f00cb-8452-40aa-ab56-8dc0975dc08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:03.533776 kubelet[2796]: E0117 00:44:03.530408 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:44:04.390075 kubelet[2796]: E0117 00:44:04.389900 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:44:04.391943 kubelet[2796]: E0117 00:44:04.390793 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:44:06.392077 systemd[1]: Started sshd@20-10.0.0.123:22-10.0.0.1:42142.service - OpenSSH per-connection server daemon (10.0.0.1:42142). 
Jan 17 00:44:06.579524 sshd[6239]: Accepted publickey for core from 10.0.0.1 port 42142 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:06.586514 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:06.607388 kubelet[2796]: E0117 00:44:06.607353 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:06.613785 systemd-logind[1555]: New session 21 of user core. Jan 17 00:44:06.635064 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:44:06.882098 sshd[6239]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:06.893097 systemd[1]: sshd@20-10.0.0.123:22-10.0.0.1:42142.service: Deactivated successfully. Jan 17 00:44:06.902922 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:44:06.906126 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:44:06.908707 systemd-logind[1555]: Removed session 21. Jan 17 00:44:07.327144 kubelet[2796]: E0117 00:44:07.324744 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:10.384890 containerd[1588]: time="2026-01-17T00:44:10.384831470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:44:10.459532 containerd[1588]: time="2026-01-17T00:44:10.459458286Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:10.461418 containerd[1588]: time="2026-01-17T00:44:10.461295113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:44:10.461603 containerd[1588]: time="2026-01-17T00:44:10.461443039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:44:10.461978 kubelet[2796]: E0117 00:44:10.461872 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:10.462637 kubelet[2796]: E0117 00:44:10.461994 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:44:10.462637 kubelet[2796]: E0117 00:44:10.462291 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:10.465547 containerd[1588]: time="2026-01-17T00:44:10.465408878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:44:10.541031 containerd[1588]: time="2026-01-17T00:44:10.540888021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:10.543520 containerd[1588]: time="2026-01-17T00:44:10.543327145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:44:10.543520 containerd[1588]: time="2026-01-17T00:44:10.543480341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:44:10.543747 kubelet[2796]: E0117 00:44:10.543630 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:10.543849 kubelet[2796]: E0117 00:44:10.543742 2796 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:44:10.544046 kubelet[2796]: E0117 00:44:10.543867 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9l4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7kh68_calico-system(1d6f5cd7-ec64-4020-903c-bd9456eec0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:10.545703 kubelet[2796]: E0117 00:44:10.545601 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:44:11.893718 systemd[1]: Started sshd@21-10.0.0.123:22-10.0.0.1:42158.service - OpenSSH per-connection server daemon (10.0.0.1:42158). Jan 17 00:44:11.939565 sshd[6259]: Accepted publickey for core from 10.0.0.1 port 42158 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:11.942455 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:11.959436 systemd-logind[1555]: New session 22 of user core. Jan 17 00:44:11.977959 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:44:12.185918 sshd[6259]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:12.192619 systemd[1]: sshd@21-10.0.0.123:22-10.0.0.1:42158.service: Deactivated successfully. Jan 17 00:44:12.200487 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:44:12.201579 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:44:12.204067 systemd-logind[1555]: Removed session 22. Jan 17 00:44:13.384114 kubelet[2796]: E0117 00:44:13.383765 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:13.386147 containerd[1588]: time="2026-01-17T00:44:13.385718139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:44:13.476454 containerd[1588]: time="2026-01-17T00:44:13.476368198Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:13.481980 containerd[1588]: time="2026-01-17T00:44:13.481810262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:13.482109 containerd[1588]: time="2026-01-17T00:44:13.481890016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:13.482313 kubelet[2796]: E0117 00:44:13.482162 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:13.482313 kubelet[2796]: E0117 00:44:13.482312 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:13.482746 kubelet[2796]: E0117 00:44:13.482486 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nk56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-bk9rz_calico-apiserver(aa612b8d-2f4c-467c-9d4c-78c8e06b8f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:13.484699 kubelet[2796]: E0117 00:44:13.484305 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:44:15.392325 kubelet[2796]: E0117 00:44:15.392257 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:44:16.385411 kubelet[2796]: E0117 00:44:16.384399 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:16.385798 kubelet[2796]: E0117 00:44:16.385721 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:44:17.205714 systemd[1]: Started sshd@22-10.0.0.123:22-10.0.0.1:45822.service - OpenSSH per-connection server daemon (10.0.0.1:45822). Jan 17 00:44:17.388908 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 45822 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:17.387451 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:17.397049 kubelet[2796]: E0117 00:44:17.388964 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:44:17.414477 systemd-logind[1555]: New session 23 of user core. Jan 17 00:44:17.427162 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:44:17.940464 sshd[6281]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:17.977502 systemd[1]: Started sshd@23-10.0.0.123:22-10.0.0.1:45830.service - OpenSSH per-connection server daemon (10.0.0.1:45830). Jan 17 00:44:17.979477 systemd[1]: sshd@22-10.0.0.123:22-10.0.0.1:45822.service: Deactivated successfully. Jan 17 00:44:18.000100 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:44:18.004302 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:44:18.012024 systemd-logind[1555]: Removed session 23. Jan 17 00:44:18.071603 sshd[6297]: Accepted publickey for core from 10.0.0.1 port 45830 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:18.075042 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:18.092119 systemd-logind[1555]: New session 24 of user core. Jan 17 00:44:18.101726 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 00:44:18.386382 kubelet[2796]: E0117 00:44:18.386002 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:44:19.287489 sshd[6297]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:19.307858 systemd[1]: Started sshd@24-10.0.0.123:22-10.0.0.1:45834.service - OpenSSH per-connection server daemon (10.0.0.1:45834). Jan 17 00:44:19.310007 systemd[1]: sshd@23-10.0.0.123:22-10.0.0.1:45830.service: Deactivated successfully. Jan 17 00:44:19.316902 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:44:19.319711 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:44:19.327458 systemd-logind[1555]: Removed session 24. Jan 17 00:44:19.383984 kubelet[2796]: E0117 00:44:19.383864 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:19.453282 sshd[6311]: Accepted publickey for core from 10.0.0.1 port 45834 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:19.466100 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:19.478917 systemd-logind[1555]: New session 25 of user core. Jan 17 00:44:19.496413 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:44:20.943893 sshd[6311]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:20.968352 systemd[1]: Started sshd@25-10.0.0.123:22-10.0.0.1:45850.service - OpenSSH per-connection server daemon (10.0.0.1:45850). Jan 17 00:44:20.972400 systemd[1]: sshd@24-10.0.0.123:22-10.0.0.1:45834.service: Deactivated successfully. Jan 17 00:44:20.990883 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:44:20.999095 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:44:21.005444 systemd-logind[1555]: Removed session 25. Jan 17 00:44:21.088970 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 45850 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:21.095062 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:21.114798 systemd-logind[1555]: New session 26 of user core. Jan 17 00:44:21.126994 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:44:21.996980 sshd[6332]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:22.018774 systemd[1]: Started sshd@26-10.0.0.123:22-10.0.0.1:45860.service - OpenSSH per-connection server daemon (10.0.0.1:45860). Jan 17 00:44:22.021374 systemd[1]: sshd@25-10.0.0.123:22-10.0.0.1:45850.service: Deactivated successfully. Jan 17 00:44:22.024794 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:44:22.038287 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:44:22.041447 systemd-logind[1555]: Removed session 26. 
Jan 17 00:44:22.091789 sshd[6349]: Accepted publickey for core from 10.0.0.1 port 45860 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:22.095371 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:22.110582 systemd-logind[1555]: New session 27 of user core. Jan 17 00:44:22.127390 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:44:22.534305 sshd[6349]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:22.543315 systemd[1]: sshd@26-10.0.0.123:22-10.0.0.1:45860.service: Deactivated successfully. Jan 17 00:44:22.565640 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:44:22.566341 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:44:22.573516 systemd-logind[1555]: Removed session 27. Jan 17 00:44:23.385139 kubelet[2796]: E0117 00:44:23.385105 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:23.391541 kubelet[2796]: E0117 00:44:23.390370 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:44:26.390644 kubelet[2796]: E0117 00:44:26.390265 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:44:27.553887 systemd[1]: Started sshd@27-10.0.0.123:22-10.0.0.1:54522.service - OpenSSH per-connection server daemon (10.0.0.1:54522). Jan 17 00:44:27.676815 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 54522 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:27.679531 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:27.700916 systemd-logind[1555]: New session 28 of user core. Jan 17 00:44:27.728095 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 17 00:44:28.102112 sshd[6391]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:28.125436 systemd[1]: sshd@27-10.0.0.123:22-10.0.0.1:54522.service: Deactivated successfully. Jan 17 00:44:28.147956 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:44:28.155983 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:44:28.168908 systemd-logind[1555]: Removed session 28. Jan 17 00:44:28.408618 kubelet[2796]: E0117 00:44:28.403503 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:44:29.422027 kubelet[2796]: E0117 00:44:29.418151 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:44:30.399759 containerd[1588]: time="2026-01-17T00:44:30.395127762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:44:30.519601 containerd[1588]: time="2026-01-17T00:44:30.519029103Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:30.532100 containerd[1588]: time="2026-01-17T00:44:30.531905149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:44:30.532100 containerd[1588]: time="2026-01-17T00:44:30.532035692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:30.534608 kubelet[2796]: E0117 00:44:30.533644 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:30.534608 kubelet[2796]: E0117 00:44:30.533759 2796 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:44:30.534608 kubelet[2796]: E0117 00:44:30.533969 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b478fd4fd-xbslh_calico-apiserver(907968b1-857c-479e-a0ab-2b58db52b182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:30.535790 kubelet[2796]: E0117 00:44:30.535724 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:44:31.398796 containerd[1588]: time="2026-01-17T00:44:31.397590463Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:44:31.492423 containerd[1588]: time="2026-01-17T00:44:31.492296637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:44:31.513871 containerd[1588]: time="2026-01-17T00:44:31.508008076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:44:31.513871 containerd[1588]: time="2026-01-17T00:44:31.508145863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:44:31.550353 kubelet[2796]: E0117 00:44:31.533604 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:31.550353 kubelet[2796]: E0117 00:44:31.547948 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:44:31.550353 kubelet[2796]: E0117 00:44:31.548804 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qc8z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2n27_calico-system(12e8789b-d87c-447d-950e-1991d31141d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:44:31.550353 kubelet[2796]: E0117 00:44:31.549922 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:44:33.140786 systemd[1]: Started sshd@28-10.0.0.123:22-10.0.0.1:60696.service - OpenSSH per-connection server daemon (10.0.0.1:60696). Jan 17 00:44:33.281913 sshd[6407]: Accepted publickey for core from 10.0.0.1 port 60696 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:33.281533 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:33.307955 systemd-logind[1555]: New session 29 of user core. Jan 17 00:44:33.311045 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:44:33.653465 sshd[6407]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:33.678278 systemd[1]: sshd@28-10.0.0.123:22-10.0.0.1:60696.service: Deactivated successfully. Jan 17 00:44:33.689846 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:44:33.690122 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:44:33.708012 systemd-logind[1555]: Removed session 29. 
Jan 17 00:44:36.393637 kubelet[2796]: E0117 00:44:36.391583 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:44:38.384242 kubelet[2796]: E0117 00:44:38.384044 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:44:38.693033 systemd[1]: Started sshd@29-10.0.0.123:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Jan 17 00:44:38.888058 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:38.899037 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:38.914430 systemd-logind[1555]: New session 30 of user core. Jan 17 00:44:38.926968 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:44:39.394879 kubelet[2796]: E0117 00:44:39.394772 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:44:39.852334 sshd[6424]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:39.876990 systemd[1]: sshd@29-10.0.0.123:22-10.0.0.1:60704.service: Deactivated successfully. 
Jan 17 00:44:39.884117 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:44:39.884654 systemd-logind[1555]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:44:39.887560 systemd-logind[1555]: Removed session 30. Jan 17 00:44:40.384849 kubelet[2796]: E0117 00:44:40.382069 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:41.401526 kubelet[2796]: E0117 00:44:41.398496 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:44:43.395838 kubelet[2796]: E0117 00:44:43.392033 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:44:44.387163 kubelet[2796]: E0117 00:44:44.384336 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:44:44.875952 systemd[1]: Started sshd@30-10.0.0.123:22-10.0.0.1:58330.service - OpenSSH per-connection server daemon (10.0.0.1:58330). Jan 17 00:44:44.956274 sshd[6443]: Accepted publickey for core from 10.0.0.1 port 58330 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:44.961583 sshd[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:44.983943 systemd-logind[1555]: New session 31 of user core. Jan 17 00:44:44.997419 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:44:45.433109 sshd[6443]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:45.469820 systemd[1]: sshd@30-10.0.0.123:22-10.0.0.1:58330.service: Deactivated successfully. Jan 17 00:44:45.484011 systemd-logind[1555]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:44:45.486442 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:44:45.496396 systemd-logind[1555]: Removed session 31. 
Jan 17 00:44:50.474172 systemd[1]: Started sshd@31-10.0.0.123:22-10.0.0.1:58344.service - OpenSSH per-connection server daemon (10.0.0.1:58344). Jan 17 00:44:50.732996 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 58344 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:50.746466 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:50.787903 systemd-logind[1555]: New session 32 of user core. Jan 17 00:44:50.795900 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:44:51.244533 sshd[6459]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:51.289644 systemd[1]: sshd@31-10.0.0.123:22-10.0.0.1:58344.service: Deactivated successfully. Jan 17 00:44:51.317583 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:44:51.321014 systemd-logind[1555]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:44:51.323768 systemd-logind[1555]: Removed session 32. Jan 17 00:44:51.392302 kubelet[2796]: E0117 00:44:51.390394 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-bk9rz" podUID="aa612b8d-2f4c-467c-9d4c-78c8e06b8f95" Jan 17 00:44:51.399110 kubelet[2796]: E0117 00:44:51.395572 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kh68" podUID="1d6f5cd7-ec64-4020-903c-bd9456eec0b4" Jan 17 00:44:54.392074 kubelet[2796]: E0117 00:44:54.389391 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2n27" podUID="12e8789b-d87c-447d-950e-1991d31141d1" Jan 17 00:44:54.399885 kubelet[2796]: E0117 00:44:54.395952 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b478fd4fd-xbslh" podUID="907968b1-857c-479e-a0ab-2b58db52b182" Jan 17 00:44:54.400843 kubelet[2796]: E0117 00:44:54.400519 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb8df59d-7qm96" podUID="63b9bfd8-2242-41c2-9f61-17499f636020" Jan 17 00:44:55.384447 kubelet[2796]: E0117 00:44:55.383010 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-744d6dbcbc-9t986" podUID="5e3f00cb-8452-40aa-ab56-8dc0975dc08a" Jan 17 00:44:56.279834 systemd[1]: Started sshd@32-10.0.0.123:22-10.0.0.1:49834.service - OpenSSH per-connection server daemon (10.0.0.1:49834). Jan 17 00:44:56.396171 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 49834 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:44:56.402546 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:56.430152 systemd-logind[1555]: New session 33 of user core. Jan 17 00:44:56.449386 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 17 00:44:56.902539 sshd[6475]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:56.928068 systemd[1]: sshd@32-10.0.0.123:22-10.0.0.1:49834.service: Deactivated successfully. Jan 17 00:44:56.934838 systemd-logind[1555]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:44:56.935940 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:44:56.945595 systemd-logind[1555]: Removed session 33.