Jan 20 00:52:53.996065 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:52:53.996086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:52:53.996096 kernel: BIOS-provided physical RAM map: Jan 20 00:52:53.996102 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 00:52:53.996107 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 00:52:53.996112 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 00:52:53.996118 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 00:52:53.996124 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 00:52:53.996130 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:52:53.996137 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 00:52:53.996143 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 00:52:53.996148 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 00:52:53.996154 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 00:52:53.996159 kernel: NX (Execute Disable) protection: active Jan 20 00:52:53.996166 kernel: APIC: Static calls initialized Jan 20 00:52:53.996174 kernel: SMBIOS 2.8 present. 
Jan 20 00:52:53.996180 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 00:52:53.996186 kernel: Hypervisor detected: KVM Jan 20 00:52:53.996191 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:52:53.996197 kernel: kvm-clock: using sched offset of 4154248121 cycles Jan 20 00:52:53.996203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:52:53.996209 kernel: tsc: Detected 2445.424 MHz processor Jan 20 00:52:53.996215 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:52:53.996221 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:52:53.996227 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 00:52:53.996235 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 00:52:53.996241 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:52:53.996247 kernel: Using GB pages for direct mapping Jan 20 00:52:53.996253 kernel: ACPI: Early table checksum verification disabled Jan 20 00:52:53.996259 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 00:52:53.996265 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996271 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996277 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996285 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 00:52:53.996291 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996297 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996303 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996309 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:52:53.996314 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 00:52:53.996321 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 00:52:53.996330 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 00:52:53.996338 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 00:52:53.996344 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 00:52:53.996350 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 00:52:53.996392 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 00:52:53.996399 kernel: No NUMA configuration found Jan 20 00:52:53.996405 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 00:52:53.996411 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 20 00:52:53.996420 kernel: Zone ranges: Jan 20 00:52:53.996427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:52:53.996433 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 00:52:53.996439 kernel: Normal empty Jan 20 00:52:53.996445 kernel: Movable zone start for each node Jan 20 00:52:53.996451 kernel: Early memory node ranges Jan 20 00:52:53.996457 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 00:52:53.996463 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 00:52:53.996469 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 20 00:52:53.996478 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:52:53.996484 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 00:52:53.996490 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 00:52:53.996496 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:52:53.996502 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:52:53.996508 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:52:53.996515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:52:53.996521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:52:53.996527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:52:53.996535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:52:53.996541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:52:53.996547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:52:53.996554 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:52:53.996560 kernel: TSC deadline timer available Jan 20 00:52:53.996566 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:52:53.996572 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:52:53.996578 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:52:53.996584 kernel: kvm-guest: setup PV sched yield Jan 20 00:52:53.996590 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 00:52:53.996599 kernel: Booting paravirtualized kernel on KVM Jan 20 00:52:53.996605 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:52:53.996611 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:52:53.996617 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:52:53.996623 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:52:53.996629 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:52:53.996635 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:52:53.996642 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:52:53.996649 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:52:53.996657 kernel: random: crng init done Jan 20 00:52:53.996663 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:52:53.996670 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:52:53.996676 kernel: Fallback order for Node 0: 0 Jan 20 00:52:53.996682 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 20 00:52:53.996688 kernel: Policy zone: DMA32 Jan 20 00:52:53.996694 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:52:53.996700 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136884K reserved, 0K cma-reserved) Jan 20 00:52:53.996709 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:52:53.996715 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:52:53.996721 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:52:53.996727 kernel: Dynamic Preempt: voluntary Jan 20 00:52:53.996733 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:52:53.996740 kernel: rcu: RCU event tracing is enabled. Jan 20 00:52:53.996747 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:52:53.996753 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:52:53.996759 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:52:53.996768 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:52:53.996774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:52:53.996780 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:52:53.996786 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:52:53.996792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:52:53.996798 kernel: Console: colour VGA+ 80x25 Jan 20 00:52:53.996805 kernel: printk: console [ttyS0] enabled Jan 20 00:52:53.996811 kernel: ACPI: Core revision 20230628 Jan 20 00:52:53.996817 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:52:53.996825 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:52:53.996831 kernel: x2apic enabled Jan 20 00:52:53.996837 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:52:53.996844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:52:53.996850 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:52:53.996856 kernel: kvm-guest: setup PV IPIs Jan 20 00:52:53.996862 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:52:53.996878 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:52:53.996884 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 20 00:52:53.996891 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:52:53.996897 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:52:53.996903 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:52:53.996912 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:52:53.996918 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:52:53.996945 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:52:53.996952 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:52:53.996958 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:52:53.996968 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 20 00:52:53.996974 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:52:53.996981 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:52:53.996987 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:52:53.996994 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:52:53.997000 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:52:53.997007 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:52:53.997013 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:52:53.997021 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:52:53.997028 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:52:53.997034 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:52:53.997041 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:52:53.997047 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:52:53.997054 kernel: landlock: Up and running. Jan 20 00:52:53.997060 kernel: SELinux: Initializing. Jan 20 00:52:53.997066 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:52:53.997073 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:52:53.997082 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:52:53.997088 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:52:53.997095 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:52:53.997101 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:52:53.997108 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:52:53.997114 kernel: signal: max sigframe size: 1776 Jan 20 00:52:53.997121 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:52:53.997127 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:52:53.997134 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:52:53.997142 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:52:53.997149 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:52:53.997155 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:52:53.997161 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:52:53.997168 kernel: smpboot: Max logical packages: 1 Jan 20 00:52:53.997174 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 20 00:52:53.997181 kernel: devtmpfs: initialized Jan 20 00:52:53.997187 kernel: x86/mm: Memory block size: 128MB Jan 20 00:52:53.997193 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:52:53.997202 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:52:53.997209 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:52:53.997215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:52:53.997221 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:52:53.997228 kernel: audit: type=2000 audit(1768870373.021:1): state=initialized audit_enabled=0 res=1 Jan 20 00:52:53.997234 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:52:53.997241 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:52:53.997247 kernel: cpuidle: using governor menu Jan 20 00:52:53.997253 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:52:53.997262 kernel: dca service started, version 1.12.1 Jan 20 00:52:53.997269 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:52:53.997275 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:52:53.997282 kernel: PCI: Using configuration type 1 for base access Jan 20 00:52:53.997288 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 20 00:52:53.997295 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:52:53.997301 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:52:53.997307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:52:53.997314 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:52:53.997322 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:52:53.997329 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:52:53.997335 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:52:53.997342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:52:53.997348 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:52:53.997443 kernel: ACPI: Interpreter enabled Jan 20 00:52:53.997452 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:52:53.997459 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:52:53.997465 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:52:53.997475 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:52:53.997481 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:52:53.997488 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:52:53.997662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:52:53.997793 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:52:53.997915 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:52:53.997952 kernel: PCI host bridge to bus 0000:00 Jan 20 00:52:53.998087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:52:53.998199 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 20 00:52:53.998309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:52:53.998464 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:52:53.998576 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:52:53.998684 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 00:52:53.998790 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:52:53.998965 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:52:53.999101 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:52:53.999222 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 20 00:52:53.999339 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 20 00:52:53.999535 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 20 00:52:53.999658 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:52:53.999795 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:52:53.999916 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 20 00:52:54.000070 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 20 00:52:54.000190 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 00:52:54.000315 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:52:54.000486 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 20 00:52:54.000607 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 20 00:52:54.000730 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 00:52:54.000855 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:52:54.001005 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 20 00:52:54.001125 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 20 00:52:54.001242 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 00:52:54.001484 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 20 00:52:54.001639 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:52:54.001766 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:52:54.001892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:52:54.002046 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 20 00:52:54.002201 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 20 00:52:54.002459 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:52:54.002588 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 20 00:52:54.002597 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:52:54.002608 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:52:54.002615 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:52:54.002622 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:52:54.002628 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:52:54.002635 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:52:54.002641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:52:54.002647 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:52:54.002654 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
20 00:52:54.002660 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:52:54.002669 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:52:54.002675 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:52:54.002682 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:52:54.002688 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:52:54.002694 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:52:54.002701 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:52:54.002707 kernel: iommu: Default domain type: Translated Jan 20 00:52:54.002714 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:52:54.002720 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:52:54.002729 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:52:54.002736 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 00:52:54.002742 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 00:52:54.002860 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:52:54.003010 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:52:54.003132 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:52:54.003141 kernel: vgaarb: loaded Jan 20 00:52:54.003148 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 00:52:54.003158 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:52:54.003165 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:52:54.003171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:52:54.003178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:52:54.003184 kernel: pnp: PnP ACPI init Jan 20 00:52:54.003310 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:52:54.003320 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:52:54.003327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:52:54.003337 kernel: NET: Registered PF_INET protocol family Jan 20 00:52:54.003343 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:52:54.003350 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:52:54.003397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:52:54.003404 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:52:54.003411 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:52:54.003417 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:52:54.003424 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:52:54.003430 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:52:54.003440 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:52:54.003447 kernel: NET: Registered PF_XDP protocol family Jan 20 00:52:54.003564 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:52:54.003673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:52:54.003781 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:52:54.003889 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:52:54.004026 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 20 00:52:54.004135 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 00:52:54.004148 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:52:54.004155 kernel: Initialise system trusted keyrings Jan 20 00:52:54.004161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:52:54.004168 kernel: Key type asymmetric registered Jan 20 00:52:54.004174 kernel: Asymmetric key parser 'x509' registered Jan 20 00:52:54.004180 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:52:54.004187 kernel: io scheduler mq-deadline registered Jan 20 00:52:54.004193 kernel: io scheduler kyber registered Jan 20 00:52:54.004200 kernel: io scheduler bfq registered Jan 20 00:52:54.004209 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:52:54.004215 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:52:54.004222 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:52:54.004229 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:52:54.004235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:52:54.004242 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 00:52:54.004248 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:52:54.004255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:52:54.004261 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:52:54.004431 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:52:54.004443 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:52:54.004557 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:52:54.004668 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:52:53 UTC (1768870373) Jan 20 00:52:54.004780 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:52:54.004788 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:52:54.004794 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:52:54.004801 kernel: Segment Routing with IPv6 Jan 20 00:52:54.004811 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:52:54.004818 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:52:54.004824 kernel: Key type dns_resolver registered Jan 20 00:52:54.004830 kernel: IPI shorthand broadcast: enabled Jan 20 00:52:54.004837 kernel: sched_clock: Marking stable (1102014373, 307304518)->(1545966324, -136647433) Jan 20 00:52:54.004843 kernel: registered taskstats version 1 Jan 20 00:52:54.004850 kernel: Loading compiled-in X.509 certificates Jan 20 00:52:54.004856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:52:54.004863 kernel: Key type .fscrypt registered Jan 20 00:52:54.004872 kernel: Key type fscrypt-provisioning registered Jan 20 00:52:54.004878 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:52:54.004885 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:52:54.004891 kernel: ima: No architecture policies found Jan 20 00:52:54.004897 kernel: clk: Disabling unused clocks Jan 20 00:52:54.004904 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:52:54.004910 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:52:54.004917 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:52:54.004949 kernel: Run /init as init process Jan 20 00:52:54.004960 kernel: with arguments: Jan 20 00:52:54.004970 kernel: /init Jan 20 00:52:54.004976 kernel: with environment: Jan 20 00:52:54.004983 kernel: HOME=/ Jan 20 00:52:54.004991 kernel: TERM=linux Jan 20 00:52:54.005000 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:52:54.005008 systemd[1]: Detected virtualization kvm. Jan 20 00:52:54.005015 systemd[1]: Detected architecture x86-64. Jan 20 00:52:54.005024 systemd[1]: Running in initrd. Jan 20 00:52:54.005031 systemd[1]: No hostname configured, using default hostname. Jan 20 00:52:54.005038 systemd[1]: Hostname set to . Jan 20 00:52:54.005045 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:52:54.005051 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:52:54.005058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:52:54.005065 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:52:54.005073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:52:54.005082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:52:54.005089 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:52:54.005097 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:52:54.005105 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:52:54.005112 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:52:54.005119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:52:54.005128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:52:54.005135 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:52:54.005141 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:52:54.005148 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:52:54.005166 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:52:54.005176 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:52:54.005183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:52:54.005192 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:52:54.005200 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 20 00:52:54.005207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:52:54.005214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:52:54.005221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:52:54.005228 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:52:54.005235 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:52:54.005242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:52:54.005251 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:52:54.005258 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:52:54.005265 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:52:54.005275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:52:54.005282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:52:54.005308 systemd-journald[194]: Collecting audit messages is disabled. Jan 20 00:52:54.005326 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:52:54.005333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:52:54.005341 systemd-journald[194]: Journal started Jan 20 00:52:54.005390 systemd-journald[194]: Runtime Journal (/run/log/journal/dcf0116742b043d2a2bedd9fef431969) is 6.0M, max 48.4M, 42.3M free. Jan 20 00:52:54.008409 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:52:54.012286 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:52:54.014799 systemd-modules-load[195]: Inserted module 'overlay' Jan 20 00:52:54.017133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:52:54.026145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:52:54.031570 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:52:54.165618 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 00:52:54.165643 kernel: Bridge firewalling registered Jan 20 00:52:54.036020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:52:54.049295 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 20 00:52:54.154028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:52:54.158996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:52:54.173571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:52:54.181663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:52:54.186835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:52:54.190609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:52:54.208429 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:52:54.211732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:52:54.227628 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 20 00:52:54.231448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:52:54.243757 dracut-cmdline[228]: dracut-dracut-053 Jan 20 00:52:54.247165 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:52:54.267703 systemd-resolved[231]: Positive Trust Anchors: Jan 20 00:52:54.267738 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:52:54.267765 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:52:54.270123 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 20 00:52:54.271311 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:52:54.273609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:52:54.365427 kernel: SCSI subsystem initialized Jan 20 00:52:54.375437 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:52:54.387454 kernel: iscsi: registered transport (tcp) Jan 20 00:52:54.407993 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:52:54.408031 kernel: QLogic iSCSI HBA Driver Jan 20 00:52:54.457442 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:52:54.474582 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:52:54.505958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:52:54.505989 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:52:54.508667 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:52:54.552449 kernel: raid6: avx2x4 gen() 30741 MB/s Jan 20 00:52:54.570434 kernel: raid6: avx2x2 gen() 24747 MB/s Jan 20 00:52:54.589220 kernel: raid6: avx2x1 gen() 26008 MB/s Jan 20 00:52:54.589244 kernel: raid6: using algorithm avx2x4 gen() 30741 MB/s Jan 20 00:52:54.608284 kernel: raid6: .... xor() 5143 MB/s, rmw enabled Jan 20 00:52:54.608312 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:52:54.628425 kernel: xor: automatically using best checksumming function avx Jan 20 00:52:54.774439 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:52:54.788304 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:52:54.800640 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:52:54.813033 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 20 00:52:54.817686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:52:54.818851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:52:54.843959 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 20 00:52:54.879143 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:52:54.895545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:52:54.962539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:52:54.974894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:52:54.984985 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:52:54.988567 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:52:54.992777 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:52:55.001759 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:52:55.017393 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:52:55.024538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:52:55.031456 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:52:55.035853 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:52:55.034281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:52:55.046453 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:52:55.046487 kernel: GPT:9289727 != 19775487 Jan 20 00:52:55.046498 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:52:55.046507 kernel: GPT:9289727 != 19775487 Jan 20 00:52:55.034448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:52:55.057010 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:52:55.057049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:52:55.050722 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:52:55.056866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:52:55.057133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:52:55.063664 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:52:55.079807 kernel: libata version 3.00 loaded. Jan 20 00:52:55.080693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:52:55.093753 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:52:55.093980 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:52:55.103143 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:52:55.103317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:52:55.105453 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:52:55.112439 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Jan 20 00:52:55.116414 kernel: AVX2 version of gcm_enc/dec engaged. Jan 20 00:52:55.116437 kernel: AES CTR mode by8 optimization enabled Jan 20 00:52:55.124753 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461) Jan 20 00:52:55.124783 kernel: scsi host0: ahci Jan 20 00:52:55.124734 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 20 00:52:55.260216 kernel: scsi host1: ahci Jan 20 00:52:55.260471 kernel: scsi host2: ahci Jan 20 00:52:55.260665 kernel: scsi host3: ahci Jan 20 00:52:55.260826 kernel: scsi host4: ahci Jan 20 00:52:55.261011 kernel: scsi host5: ahci Jan 20 00:52:55.261156 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Jan 20 00:52:55.261167 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Jan 20 00:52:55.261178 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Jan 20 00:52:55.261188 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Jan 20 00:52:55.261197 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Jan 20 00:52:55.261206 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Jan 20 00:52:55.260575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:52:55.277498 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:52:55.291292 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:52:55.300745 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:52:55.307161 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:52:55.325532 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:52:55.332833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:52:55.341627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:52:55.341646 disk-uuid[558]: Primary Header is updated. Jan 20 00:52:55.341646 disk-uuid[558]: Secondary Entries is updated. Jan 20 00:52:55.341646 disk-uuid[558]: Secondary Header is updated. Jan 20 00:52:55.350815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:52:55.370971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:52:55.454398 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:52:55.454446 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:52:55.454457 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:52:55.456430 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:52:55.458436 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:52:55.461699 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:52:55.461719 kernel: ata3.00: applying bridge limits Jan 20 00:52:55.463429 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:52:55.465455 kernel: ata3.00: configured for UDMA/100 Jan 20 00:52:55.469548 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:52:55.525523 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:52:55.525765 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:52:55.540496 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:52:56.347298 disk-uuid[560]: The operation has completed successfully. Jan 20 00:52:56.350850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:52:56.379204 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:52:56.379391 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 20 00:52:56.410542 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:52:56.414696 sh[597]: Success Jan 20 00:52:56.429656 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:52:56.467257 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:52:56.479054 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:52:56.482656 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:52:56.497719 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:52:56.497747 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:52:56.497764 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:52:56.502335 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:52:56.502387 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:52:56.510796 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:52:56.513783 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:52:56.525520 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:52:56.528526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:52:56.544192 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:52:56.544212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:52:56.544222 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:52:56.549397 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:52:56.559618 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 00:52:56.564979 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:52:56.571112 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 00:52:56.584552 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:52:56.635487 ignition[699]: Ignition 2.19.0 Jan 20 00:52:56.635500 ignition[699]: Stage: fetch-offline Jan 20 00:52:56.635583 ignition[699]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:56.635595 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:56.635677 ignition[699]: parsed url from cmdline: "" Jan 20 00:52:56.635681 ignition[699]: no config URL provided Jan 20 00:52:56.635687 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:52:56.635696 ignition[699]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:52:56.635721 ignition[699]: op(1): [started] loading QEMU firmware config module Jan 20 00:52:56.635726 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 00:52:56.647167 ignition[699]: op(1): [finished] loading QEMU firmware config module Jan 20 00:52:56.670678 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:52:56.690509 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 00:52:56.716522 systemd-networkd[785]: lo: Link UP Jan 20 00:52:56.716548 systemd-networkd[785]: lo: Gained carrier Jan 20 00:52:56.721904 systemd-networkd[785]: Enumeration completed Jan 20 00:52:56.722100 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:52:56.730141 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:52:56.730149 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:52:56.731065 systemd[1]: Reached target network.target - Network. Jan 20 00:52:56.731285 systemd-networkd[785]: eth0: Link UP Jan 20 00:52:56.731290 systemd-networkd[785]: eth0: Gained carrier Jan 20 00:52:56.731297 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:52:56.764416 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:52:56.834535 ignition[699]: parsing config with SHA512: 69bcff4093874e93a9a1931d4a06c9c0518a0ec394eb5b51ea9031da7da7f778189c0e45a1acb9ada28203e7f28ed2a67bfa7afb4a4ddd253c85cb89f668c79e Jan 20 00:52:56.838070 unknown[699]: fetched base config from "system" Jan 20 00:52:56.838090 unknown[699]: fetched user config from "qemu" Jan 20 00:52:56.838468 ignition[699]: fetch-offline: fetch-offline passed Jan 20 00:52:56.840742 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:52:56.838529 ignition[699]: Ignition finished successfully Jan 20 00:52:56.845841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 00:52:56.861576 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 00:52:56.875098 ignition[789]: Ignition 2.19.0 Jan 20 00:52:56.875117 ignition[789]: Stage: kargs Jan 20 00:52:56.875272 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:56.875283 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:56.875976 ignition[789]: kargs: kargs passed Jan 20 00:52:56.876017 ignition[789]: Ignition finished successfully Jan 20 00:52:56.889348 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 00:52:56.899529 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:52:56.919040 ignition[798]: Ignition 2.19.0 Jan 20 00:52:56.919068 ignition[798]: Stage: disks Jan 20 00:52:56.919336 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:56.919350 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:56.928100 ignition[798]: disks: disks passed Jan 20 00:52:56.928166 ignition[798]: Ignition finished successfully Jan 20 00:52:56.933117 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:52:56.936015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:52:56.940976 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:52:56.944085 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:52:56.949502 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:52:56.952183 systemd[1]: Reached target basic.target - Basic System. 
Jan 20 00:52:56.969586 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:52:56.983384 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 20 00:52:56.988317 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:52:56.989575 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:52:57.085432 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 00:52:57.086303 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:52:57.091054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:52:57.113490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:52:57.120035 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:52:57.133519 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 20 00:52:57.133591 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:52:57.133604 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:52:57.133613 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:52:57.133759 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 00:52:57.140405 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:52:57.133818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 00:52:57.133841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:52:57.152813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:52:57.157779 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 00:52:57.175527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 00:52:57.213026 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 00:52:57.218246 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 20 00:52:57.224713 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 00:52:57.231699 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 00:52:57.334784 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 00:52:57.348557 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 00:52:57.355102 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 00:52:57.362441 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:52:57.394721 ignition[927]: INFO : Ignition 2.19.0 Jan 20 00:52:57.394721 ignition[927]: INFO : Stage: mount Jan 20 00:52:57.403285 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:57.403285 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:57.403285 ignition[927]: INFO : mount: mount passed Jan 20 00:52:57.403285 ignition[927]: INFO : Ignition finished successfully Jan 20 00:52:57.397701 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:52:57.403312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 20 00:52:57.439487 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:52:57.494057 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:52:57.508615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:52:57.517425 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Jan 20 00:52:57.522561 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:52:57.522585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:52:57.522596 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:52:57.531409 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:52:57.532879 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:52:57.556824 ignition[960]: INFO : Ignition 2.19.0 Jan 20 00:52:57.556824 ignition[960]: INFO : Stage: files Jan 20 00:52:57.560630 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:57.560630 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:57.560630 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:52:57.570098 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:52:57.570098 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:52:57.578747 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:52:57.582174 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:52:57.585642 unknown[960]: wrote ssh authorized keys file for user: core Jan 20 00:52:57.588468 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:52:57.588468 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:52:57.588468 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 00:52:57.637048 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 00:52:57.785038 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:52:57.785038 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:52:57.794316 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:52:57.794316 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:52:57.803442 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:52:57.808204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:52:57.812642 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:52:57.812642 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:52:57.821242 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:52:57.825696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:52:57.830169 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:52:57.834481 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:52:57.840706 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:52:57.840706 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:52:57.852162 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 00:52:58.057682 systemd-networkd[785]: eth0: Gained IPv6LL Jan 20 00:52:58.150213 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 00:52:58.945402 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 00:52:58.952738 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for 
"prepare-helm.service" Jan 20 00:52:59.003743 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:52:59.003743 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:52:59.003743 ignition[960]: INFO : files: files passed Jan 20 00:52:59.003743 ignition[960]: INFO : Ignition finished successfully Jan 20 00:52:58.980614 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:52:59.010592 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:52:59.018994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:52:59.026263 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:52:59.073624 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:52:59.026428 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:52:59.080434 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:52:59.080434 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:52:59.039467 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:52:59.095322 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:52:59.045561 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:52:59.053924 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:52:59.089546 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:52:59.089688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:52:59.095653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:52:59.102586 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:52:59.105555 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:52:59.106870 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:52:59.133630 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:52:59.139998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:52:59.157605 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:52:59.160720 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:52:59.166906 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:52:59.172163 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:52:59.172287 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:52:59.178126 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:52:59.182613 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:52:59.188305 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:52:59.194015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 20 00:52:59.199664 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:52:59.206054 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:52:59.212614 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:52:59.216203 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:52:59.221812 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:52:59.226999 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:52:59.233509 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:52:59.233633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:52:59.239040 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:52:59.244240 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:52:59.249401 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:52:59.249580 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:52:59.255108 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:52:59.255228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:52:59.260891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:52:59.261038 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:52:59.267486 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:52:59.273204 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:52:59.276478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:52:59.283561 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:52:59.288281 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:52:59.293404 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:52:59.293531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:52:59.299589 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:52:59.299693 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:52:59.306079 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:52:59.348087 ignition[1014]: INFO : Ignition 2.19.0 Jan 20 00:52:59.348087 ignition[1014]: INFO : Stage: umount Jan 20 00:52:59.348087 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:52:59.348087 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:52:59.348087 ignition[1014]: INFO : umount: umount passed Jan 20 00:52:59.348087 ignition[1014]: INFO : Ignition finished successfully Jan 20 00:52:59.306204 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:52:59.312081 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:52:59.312197 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:52:59.330680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:52:59.336032 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:52:59.339560 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:52:59.339922 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
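For context, the file, link, and unit operations the Ignition "files" stage reports above (the helm tarball GET, the kubernetes.raw sysext link, the prepare-helm.service/coreos-metadata.service presets) are the kind produced by a Flatcar Butane config along the following lines. This is a minimal illustrative sketch, not the config actually provisioned to this machine; the paths and URL are taken from the log, while the prepare-helm.service unit body is hypothetical.

variant: flatcar
version: 1.0.0
storage:
  files:
    # Fetched at first boot, matching the GET for the helm tarball logged above.
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
  links:
    # Activates the Kubernetes sysext image written under /opt/extensions.
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
      hard: false
systemd:
  units:
    # Preset to enabled, as in the op(11) entries above; the body here is a guess.
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        ExecStartPre=/usr/bin/mkdir -p /opt/bin
        ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm
        [Install]
        WantedBy=multi-user.target
    # Preset to disabled, as in the op(f) entries above.
    - name: coreos-metadata.service
      enabled: false

A config of this shape is normally rendered to the Ignition JSON consumed at this stage with the butane tool before being handed to the platform (here, QEMU via the qemu provider seen in the "no config dir" messages).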
Jan 20 00:52:59.345420 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:52:59.345579 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:52:59.353975 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:52:59.354105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:52:59.359482 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:52:59.359627 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:52:59.365314 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:52:59.368614 systemd[1]: Stopped target network.target - Network. Jan 20 00:52:59.372259 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:52:59.372334 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:52:59.377467 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:52:59.377518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:52:59.383479 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:52:59.383531 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:52:59.388504 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:52:59.388553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:52:59.393976 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:52:59.399130 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:52:59.402472 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 20 00:52:59.405063 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:52:59.405211 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:52:59.412333 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:52:59.412603 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:52:59.418817 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:52:59.418882 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:52:59.439533 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:52:59.443468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:52:59.443537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:52:59.447124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:52:59.447183 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:52:59.451964 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:52:59.452019 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:52:59.454839 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:52:59.454889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:52:59.460834 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:52:59.466817 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:52:59.466972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:52:59.473177 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:52:59.473251 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 20 00:52:59.477846 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:52:59.478064 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:52:59.483324 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:52:59.483494 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:52:59.488563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:52:59.617173 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 20 00:52:59.488639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:52:59.493401 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:52:59.493447 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:52:59.496128 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:52:59.496184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:52:59.501550 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:52:59.501600 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:52:59.507159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:52:59.507209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:52:59.527738 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:52:59.533005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:52:59.533075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:52:59.539122 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 00:52:59.539175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:52:59.544983 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:52:59.545044 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:52:59.548476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:52:59.548525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:52:59.554796 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:52:59.554923 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:52:59.562100 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:52:59.577541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:52:59.588827 systemd[1]: Switching root. 
Jan 20 00:52:59.684700 systemd-journald[194]: Journal stopped Jan 20 00:53:00.799738 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:53:00.799816 kernel: SELinux: policy capability open_perms=1 Jan 20 00:53:00.799828 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:53:00.799848 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:53:00.799858 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:53:00.799878 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:53:00.799889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:53:00.799899 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:53:00.799909 kernel: audit: type=1403 audit(1768870379.772:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:53:00.799921 systemd[1]: Successfully loaded SELinux policy in 47.089ms. Jan 20 00:53:00.799972 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.871ms. Jan 20 00:53:00.799993 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:53:00.800009 systemd[1]: Detected virtualization kvm. Jan 20 00:53:00.800020 systemd[1]: Detected architecture x86-64. Jan 20 00:53:00.800031 systemd[1]: Detected first boot. Jan 20 00:53:00.800042 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:53:00.800053 zram_generator::config[1058]: No configuration found. Jan 20 00:53:00.800065 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:53:00.800076 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:53:00.800089 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:53:00.800103 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 00:53:00.800114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:53:00.800125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:53:00.800136 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:53:00.800146 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:53:00.800157 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:53:00.800168 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:53:00.800179 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:53:00.800192 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:53:00.800203 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:53:00.800214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:53:00.800225 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:53:00.800235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:53:00.800246 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 20 00:53:00.800257 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:53:00.800267 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:53:00.800278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:53:00.800289 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:53:00.800303 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:53:00.800314 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:53:00.800326 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:53:00.800337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:53:00.800348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:53:00.800395 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:53:00.800407 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:53:00.800421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:53:00.800432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:53:00.800443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:53:00.800454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:53:00.800465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:53:00.800475 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:53:00.800486 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:53:00.800497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:53:00.800508 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:53:00.800522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:53:00.800542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:53:00.800560 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:53:00.800578 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:53:00.800598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:53:00.800618 systemd[1]: Reached target machines.target - Containers. Jan 20 00:53:00.800638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:53:00.800660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:53:00.800682 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:53:00.800701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:53:00.800713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:53:00.800723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:53:00.800734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:53:00.800745 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 20 00:53:00.800756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:53:00.800767 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:53:00.800778 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:53:00.800791 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:53:00.800802 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:53:00.800812 kernel: loop: module loaded Jan 20 00:53:00.800823 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:53:00.800833 kernel: fuse: init (API version 7.39) Jan 20 00:53:00.800844 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:53:00.800854 kernel: ACPI: bus type drm_connector registered Jan 20 00:53:00.800865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:53:00.800876 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:53:00.800909 systemd-journald[1142]: Collecting audit messages is disabled. Jan 20 00:53:00.800930 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:53:00.800988 systemd-journald[1142]: Journal started Jan 20 00:53:00.801012 systemd-journald[1142]: Runtime Journal (/run/log/journal/dcf0116742b043d2a2bedd9fef431969) is 6.0M, max 48.4M, 42.3M free. Jan 20 00:53:00.368695 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:53:00.391575 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:53:00.392218 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:53:00.392597 systemd[1]: systemd-journald.service: Consumed 1.316s CPU time. Jan 20 00:53:00.808451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:53:00.814052 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 00:53:00.814081 systemd[1]: Stopped verity-setup.service. Jan 20 00:53:00.821428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:53:00.825398 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:53:00.829116 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:53:00.831983 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:53:00.835121 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:53:00.837767 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:53:00.840669 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:53:00.843646 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:53:00.846507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:53:00.849885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:53:00.853406 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:53:00.853617 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:53:00.856857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:53:00.857111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 20 00:53:00.860250 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:53:00.860489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:53:00.863426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:53:00.863619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:53:00.866924 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:53:00.867162 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:53:00.870177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:53:00.870574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:53:00.873750 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:53:00.876934 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:53:00.880435 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:53:00.894560 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:53:00.905489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:53:00.909745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:53:00.912683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:53:00.912733 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:53:00.916456 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:53:00.920987 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:53:00.925159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:53:00.929001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:53:00.930343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:53:00.934636 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:53:00.938642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:53:00.941175 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:53:00.945324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:53:00.954198 systemd-journald[1142]: Time spent on flushing to /var/log/journal/dcf0116742b043d2a2bedd9fef431969 is 11.573ms for 940 entries. Jan 20 00:53:00.954198 systemd-journald[1142]: System Journal (/var/log/journal/dcf0116742b043d2a2bedd9fef431969) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:53:00.982745 systemd-journald[1142]: Received client request to flush runtime journal. Jan 20 00:53:00.954555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:53:00.961562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:53:00.969035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 20 00:53:00.977467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:53:00.981317 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:53:00.987234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:53:00.991588 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:53:00.995264 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:53:00.998771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:53:01.003587 kernel: loop0: detected capacity change from 0 to 140768 Jan 20 00:53:01.007493 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 20 00:53:01.007509 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 20 00:53:01.009282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:53:01.014863 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:53:01.025554 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 00:53:01.032551 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:53:01.036441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:53:01.046414 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:53:01.047741 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:53:01.053311 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 20 00:53:01.056749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:53:01.057482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:53:01.082648 kernel: loop1: detected capacity change from 0 to 229808 Jan 20 00:53:01.087751 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:53:01.100154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:53:01.128393 kernel: loop2: detected capacity change from 0 to 142488 Jan 20 00:53:01.136833 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 20 00:53:01.136857 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 20 00:53:01.144932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:53:01.177482 kernel: loop3: detected capacity change from 0 to 140768 Jan 20 00:53:01.196410 kernel: loop4: detected capacity change from 0 to 229808 Jan 20 00:53:01.210422 kernel: loop5: detected capacity change from 0 to 142488 Jan 20 00:53:01.222699 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:53:01.224405 (sd-merge)[1199]: Merged extensions into '/usr'. Jan 20 00:53:01.228998 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:53:01.229040 systemd[1]: Reloading... Jan 20 00:53:01.288409 zram_generator::config[1221]: No configuration found. Jan 20 00:53:01.321777 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 20 00:53:01.412278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:53:01.462745 systemd[1]: Reloading finished in 232 ms. Jan 20 00:53:01.497243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:53:01.502311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:53:01.507271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:53:01.530594 systemd[1]: Starting ensure-sysext.service... Jan 20 00:53:01.534728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:53:01.539247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:53:01.544889 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:53:01.544908 systemd[1]: Reloading... Jan 20 00:53:01.558607 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:53:01.559217 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:53:01.560255 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:53:01.560657 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 20 00:53:01.560814 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 20 00:53:01.564886 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:53:01.564911 systemd-tmpfiles[1265]: Skipping /boot Jan 20 00:53:01.571087 systemd-udevd[1266]: Using default interface naming scheme 'v255'. Jan 20 00:53:01.578334 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:53:01.578350 systemd-tmpfiles[1265]: Skipping /boot Jan 20 00:53:01.607438 zram_generator::config[1292]: No configuration found. Jan 20 00:53:01.660522 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1322) Jan 20 00:53:01.719975 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:53:01.727423 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:53:01.734171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:53:01.736479 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:53:01.736742 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:53:01.741160 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:53:01.767537 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:53:01.797702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:53:01.801458 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 00:53:01.801789 systemd[1]: Reloading finished in 256 ms. Jan 20 00:53:01.818297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:53:01.822074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:53:01.884706 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:53:01.905405 kernel: kvm_amd: TSC scaling supported Jan 20 00:53:01.905459 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:53:01.905485 kernel: kvm_amd: Nested Paging enabled Jan 20 00:53:01.907596 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:53:01.907619 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:53:01.907675 systemd[1]: Finished ensure-sysext.service. Jan 20 00:53:01.952398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:53:01.961662 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:53:01.962424 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:53:01.967613 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:53:01.970855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:53:01.972140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:53:01.980086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:53:01.987453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:53:01.993146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:53:01.997538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:53:01.998637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:53:02.003415 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:53:02.012638 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:53:02.022646 augenrules[1387]: No rules Jan 20 00:53:02.019633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:53:02.031236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:53:02.036161 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:53:02.040848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:53:02.044540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:53:02.045877 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:53:02.050290 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:53:02.053928 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:53:02.057548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:53:02.057737 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:53:02.061077 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:53:02.061271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:53:02.064977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 20 00:53:02.065174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:53:02.069057 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:53:02.069250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:53:02.072991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:53:02.077146 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:53:02.096578 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:53:02.099793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:53:02.099860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:53:02.101155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:53:02.106877 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:53:02.107010 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:53:02.107511 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:53:02.112472 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:53:02.213082 systemd-networkd[1384]: lo: Link UP Jan 20 00:53:02.213093 systemd-networkd[1384]: lo: Gained carrier Jan 20 00:53:02.215224 systemd-networkd[1384]: Enumeration completed Jan 20 00:53:02.215970 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:53:02.215975 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:53:02.216641 systemd-resolved[1388]: Positive Trust Anchors: Jan 20 00:53:02.216650 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:53:02.216676 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:53:02.216841 systemd-networkd[1384]: eth0: Link UP Jan 20 00:53:02.216846 systemd-networkd[1384]: eth0: Gained carrier Jan 20 00:53:02.216857 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:53:02.220307 systemd-resolved[1388]: Defaulting to hostname 'linux'. Jan 20 00:53:02.233515 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:53:02.233867 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:53:02.234299 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 20 00:53:02.235027 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:53:02.235733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:53:02.236137 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:53:02.238672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:53:02.238814 systemd[1]: Reached target network.target - Network. Jan 20 00:53:02.239220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:53:02.240120 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:53:02.261462 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:53:02.262253 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Jan 20 00:53:03.182018 systemd-resolved[1388]: Clock change detected. Flushing caches. Jan 20 00:53:03.182054 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:53:03.182140 systemd-timesyncd[1393]: Initial clock synchronization to Tue 2026-01-20 00:53:03.181956 UTC. Jan 20 00:53:03.187436 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:53:03.191752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:53:03.194568 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:53:03.198632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:53:03.201644 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:53:03.204814 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:53:03.207996 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:53:03.211316 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:53:03.214150 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:53:03.217377 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:53:03.220468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:53:03.220517 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:53:03.222851 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:53:03.225785 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:53:03.230044 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:53:03.241484 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:53:03.245489 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:53:03.248851 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:53:03.252362 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:53:03.254791 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:53:03.257231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:53:03.257278 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
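The DHCPv4 lease acquired above comes from Flatcar's stock catch-all unit /usr/lib/systemd/network/zz-default.network, which the log notes was matched "based on potentially unpredictable interface name". Behaviourally it amounts to a DHCP-enabled .network fragment like the one sketched below, delivered here through the same Butane mechanism for illustration only; the file name and the Name=eth0 match are assumptions, not the shipped wildcard match.

storage:
  files:
    # Hypothetical explicit equivalent of the default DHCP behaviour seen in the log.
    - path: /etc/systemd/network/00-dhcp.network
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          DHCP=yes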
Jan 20 00:53:03.258599 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:53:03.262379 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:53:03.266281 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:53:03.269861 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:53:03.272479 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:53:03.276238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:53:03.281212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:53:03.285717 jq[1433]: false Jan 20 00:53:03.291559 dbus-daemon[1432]: [system] SELinux support is enabled Jan 20 00:53:03.290297 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:53:03.298519 extend-filesystems[1434]: Found loop3 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found loop4 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found loop5 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found sr0 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda1 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda2 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda3 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found usr Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda4 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda6 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda7 Jan 20 00:53:03.298519 extend-filesystems[1434]: Found vda9 Jan 20 00:53:03.298519 extend-filesystems[1434]: Checking size of /dev/vda9 Jan 20 00:53:03.388190 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:53:03.388217 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1314) Jan 20 00:53:03.388231 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:53:03.295605 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:53:03.388417 extend-filesystems[1434]: Resized partition /dev/vda9 Jan 20 00:53:03.302310 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:53:03.389007 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:53:03.389007 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:53:03.389007 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:53:03.389007 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:53:03.305287 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:53:03.392114 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jan 20 00:53:03.306573 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:53:03.308560 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 20 00:53:03.392617 update_engine[1447]: I20260120 00:53:03.342388 1447 main.cc:92] Flatcar Update Engine starting Jan 20 00:53:03.392617 update_engine[1447]: I20260120 00:53:03.347755 1447 update_check_scheduler.cc:74] Next update check in 10m35s Jan 20 00:53:03.316315 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:53:03.392984 jq[1449]: true Jan 20 00:53:03.333263 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:53:03.354190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:53:03.354415 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:53:03.354788 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:53:03.355027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:53:03.362787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:53:03.363055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:53:03.374311 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:53:03.374526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:53:03.383179 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:53:03.383201 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:53:03.386282 systemd-logind[1444]: New seat seat0. Jan 20 00:53:03.409747 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:53:03.410501 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:53:03.418067 tar[1458]: linux-amd64/LICENSE Jan 20 00:53:03.418067 tar[1458]: linux-amd64/helm Jan 20 00:53:03.418355 jq[1460]: true Jan 20 00:53:03.433028 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:53:03.436714 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:53:03.439189 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:53:03.442513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:53:03.442613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:53:03.457617 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:53:03.500251 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:53:03.505810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:53:03.510344 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:53:03.527354 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:53:03.537211 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:53:03.687433 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:53:03.705331 systemd[1]: Starting issuegen.service - Generate /run/issue... 
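Both update_engine (scheduling its next check above) and locksmithd (starting with strategy="reboot") are steered by /etc/flatcar/update.conf, which Ignition wrote earlier in op(8). A plausible shape for that file, again as an illustrative sketch via Butane rather than the actual contents written here, with GROUP=stable being an assumption:

storage:
  files:
    # Hypothetical update.conf consistent with locksmithd's "reboot" strategy in the log.
    - path: /etc/flatcar/update.conf
      overwrite: true
      contents:
        inline: |
          GROUP=stable
          REBOOT_STRATEGY=reboot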
Jan 20 00:53:03.716871 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:53:03.717157 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:53:03.722010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:53:03.854058 kernel: hrtimer: interrupt took 6411863 ns Jan 20 00:53:03.927972 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:53:03.940814 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:53:03.946774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:53:03.952347 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:53:04.106117 containerd[1461]: time="2026-01-20T00:53:04.104291579Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:53:04.135770 containerd[1461]: time="2026-01-20T00:53:04.135703528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.139164 containerd[1461]: time="2026-01-20T00:53:04.139133260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:53:04.139247 containerd[1461]: time="2026-01-20T00:53:04.139233748Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:53:04.139296 containerd[1461]: time="2026-01-20T00:53:04.139285204Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:53:04.139575 containerd[1461]: time="2026-01-20T00:53:04.139557652Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:53:04.139680 containerd[1461]: time="2026-01-20T00:53:04.139642181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.139810 containerd[1461]: time="2026-01-20T00:53:04.139792922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:53:04.139857 containerd[1461]: time="2026-01-20T00:53:04.139845901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140148 containerd[1461]: time="2026-01-20T00:53:04.140128128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140213 containerd[1461]: time="2026-01-20T00:53:04.140199962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140259 containerd[1461]: time="2026-01-20T00:53:04.140246569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140303 containerd[1461]: time="2026-01-20T00:53:04.140292064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140433 containerd[1461]: time="2026-01-20T00:53:04.140417959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.140910 containerd[1461]: time="2026-01-20T00:53:04.140891703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:53:04.141147 containerd[1461]: time="2026-01-20T00:53:04.141129157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:53:04.141198 containerd[1461]: time="2026-01-20T00:53:04.141187856Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:53:04.141337 containerd[1461]: time="2026-01-20T00:53:04.141322088Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:53:04.141440 containerd[1461]: time="2026-01-20T00:53:04.141426442Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:53:04.147199 containerd[1461]: time="2026-01-20T00:53:04.147058241Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:53:04.147237 containerd[1461]: time="2026-01-20T00:53:04.147198103Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:53:04.147237 containerd[1461]: time="2026-01-20T00:53:04.147214984Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:53:04.147237 containerd[1461]: time="2026-01-20T00:53:04.147228359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:53:04.147284 containerd[1461]: time="2026-01-20T00:53:04.147241113Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:53:04.147578 containerd[1461]: time="2026-01-20T00:53:04.147520865Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:53:04.149199 containerd[1461]: time="2026-01-20T00:53:04.148961926Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:53:04.149760 containerd[1461]: time="2026-01-20T00:53:04.149654249Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:53:04.149760 containerd[1461]: time="2026-01-20T00:53:04.149748295Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:53:04.149814 containerd[1461]: time="2026-01-20T00:53:04.149764955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:53:04.149814 containerd[1461]: time="2026-01-20T00:53:04.149782328Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149814 containerd[1461]: time="2026-01-20T00:53:04.149796635Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 20 00:53:04.149814 containerd[1461]: time="2026-01-20T00:53:04.149809879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149928 containerd[1461]: time="2026-01-20T00:53:04.149888136Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149952 containerd[1461]: time="2026-01-20T00:53:04.149930836Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149952 containerd[1461]: time="2026-01-20T00:53:04.149948188Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149985 containerd[1461]: time="2026-01-20T00:53:04.149962976Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.149985 containerd[1461]: time="2026-01-20T00:53:04.149977212Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:53:04.150017 containerd[1461]: time="2026-01-20T00:53:04.149998041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150017 containerd[1461]: time="2026-01-20T00:53:04.150014241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150312 containerd[1461]: time="2026-01-20T00:53:04.150025923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150312 containerd[1461]: time="2026-01-20T00:53:04.150279636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150312 containerd[1461]: time="2026-01-20T00:53:04.150298372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150372 containerd[1461]: time="2026-01-20T00:53:04.150313249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150372 containerd[1461]: time="2026-01-20T00:53:04.150326835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150372 containerd[1461]: time="2026-01-20T00:53:04.150341683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150372 containerd[1461]: time="2026-01-20T00:53:04.150354026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150372 containerd[1461]: time="2026-01-20T00:53:04.150370426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150451 containerd[1461]: time="2026-01-20T00:53:04.150384632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150451 containerd[1461]: time="2026-01-20T00:53:04.150397968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150451 containerd[1461]: time="2026-01-20T00:53:04.150411533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 20 00:53:04.150451 containerd[1461]: time="2026-01-20T00:53:04.150428394Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:53:04.150593 containerd[1461]: time="2026-01-20T00:53:04.150549631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150593 containerd[1461]: time="2026-01-20T00:53:04.150590046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.150629 containerd[1461]: time="2026-01-20T00:53:04.150603702Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:53:04.150950 containerd[1461]: time="2026-01-20T00:53:04.150801051Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:53:04.150995 containerd[1461]: time="2026-01-20T00:53:04.150956912Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:53:04.150995 containerd[1461]: time="2026-01-20T00:53:04.150975677Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:53:04.151044 containerd[1461]: time="2026-01-20T00:53:04.150992618Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:53:04.151044 containerd[1461]: time="2026-01-20T00:53:04.151006003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:53:04.151044 containerd[1461]: time="2026-01-20T00:53:04.151024768Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:53:04.151204 containerd[1461]: time="2026-01-20T00:53:04.151048824Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:53:04.151204 containerd[1461]: time="2026-01-20T00:53:04.151060755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:53:04.152374 containerd[1461]: time="2026-01-20T00:53:04.152271807Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:53:04.152374 containerd[1461]: time="2026-01-20T00:53:04.152353229Z" level=info msg="Connect containerd service" Jan 20 00:53:04.152572 containerd[1461]: time="2026-01-20T00:53:04.152468684Z" level=info msg="using legacy CRI server" Jan 20 00:53:04.152572 containerd[1461]: time="2026-01-20T00:53:04.152478593Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:53:04.153004 containerd[1461]: time="2026-01-20T00:53:04.152932911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:53:04.154027 containerd[1461]: time="2026-01-20T00:53:04.153974336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:53:04.154952 
containerd[1461]: time="2026-01-20T00:53:04.154324300Z" level=info msg="Start subscribing containerd event" Jan 20 00:53:04.154952 containerd[1461]: time="2026-01-20T00:53:04.154431270Z" level=info msg="Start recovering state" Jan 20 00:53:04.154952 containerd[1461]: time="2026-01-20T00:53:04.154547336Z" level=info msg="Start event monitor" Jan 20 00:53:04.154952 containerd[1461]: time="2026-01-20T00:53:04.154561162Z" level=info msg="Start snapshots syncer" Jan 20 00:53:04.154952 containerd[1461]: time="2026-01-20T00:53:04.154571551Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:53:04.154952 containerd[1461]: time="2026-01-20T00:53:04.154579045Z" level=info msg="Start streaming server" Jan 20 00:53:04.155142 containerd[1461]: time="2026-01-20T00:53:04.155111138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:53:04.155228 containerd[1461]: time="2026-01-20T00:53:04.155208741Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:53:04.155471 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:53:04.158813 containerd[1461]: time="2026-01-20T00:53:04.158504335Z" level=info msg="containerd successfully booted in 0.056479s" Jan 20 00:53:04.184592 tar[1458]: linux-amd64/README.md Jan 20 00:53:04.200815 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:53:04.740312 systemd-networkd[1384]: eth0: Gained IPv6LL Jan 20 00:53:04.763317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:53:04.767003 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:53:04.781367 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:53:04.785493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:04.789264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:53:04.814239 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:53:04.814482 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:53:04.818273 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:53:04.819430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:53:06.625630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:06.629149 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:53:06.632261 systemd[1]: Startup finished in 1.246s (kernel) + 6.046s (initrd) + 5.987s (userspace) = 13.280s. Jan 20 00:53:06.632752 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:53:07.692627 kubelet[1543]: E0120 00:53:07.692529 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:53:07.696643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:53:07.696880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:53:07.697268 systemd[1]: kubelet.service: Consumed 2.756s CPU time. 
Jan 20 00:53:07.933292 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:53:07.934794 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:47416.service - OpenSSH per-connection server daemon (10.0.0.1:47416). Jan 20 00:53:07.987805 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 47416 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:07.990167 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:07.999993 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:53:08.009340 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:53:08.011753 systemd-logind[1444]: New session 1 of user core. Jan 20 00:53:08.025868 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:53:08.028888 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:53:08.037736 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:53:08.239518 systemd[1560]: Queued start job for default target default.target. Jan 20 00:53:08.248871 systemd[1560]: Created slice app.slice - User Application Slice. Jan 20 00:53:08.248957 systemd[1560]: Reached target paths.target - Paths. Jan 20 00:53:08.248971 systemd[1560]: Reached target timers.target - Timers. Jan 20 00:53:08.250929 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:53:08.264330 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:53:08.264475 systemd[1560]: Reached target sockets.target - Sockets. Jan 20 00:53:08.264510 systemd[1560]: Reached target basic.target - Basic System. Jan 20 00:53:08.264551 systemd[1560]: Reached target default.target - Main User Target. Jan 20 00:53:08.264589 systemd[1560]: Startup finished in 219ms. Jan 20 00:53:08.264944 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:53:08.267013 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:53:08.335610 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:47432.service - OpenSSH per-connection server daemon (10.0.0.1:47432). Jan 20 00:53:08.366726 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 47432 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:08.368372 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:08.373384 systemd-logind[1444]: New session 2 of user core. Jan 20 00:53:08.383247 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:53:08.440043 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:08.450325 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:47432.service: Deactivated successfully. Jan 20 00:53:08.452439 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:53:08.454070 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:53:08.462482 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:47442.service - OpenSSH per-connection server daemon (10.0.0.1:47442). Jan 20 00:53:08.463565 systemd-logind[1444]: Removed session 2. 
Jan 20 00:53:08.489978 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 47442 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:08.491852 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:08.496689 systemd-logind[1444]: New session 3 of user core. Jan 20 00:53:08.506232 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:53:08.582935 sshd[1578]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:08.607067 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:47442.service: Deactivated successfully. Jan 20 00:53:08.614785 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:53:08.616030 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:53:08.633948 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:47458.service - OpenSSH per-connection server daemon (10.0.0.1:47458). Jan 20 00:53:08.639736 systemd-logind[1444]: Removed session 3. Jan 20 00:53:08.684615 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 47458 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:08.687281 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:08.697235 systemd-logind[1444]: New session 4 of user core. Jan 20 00:53:08.707492 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:53:08.811116 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:08.820789 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:47458.service: Deactivated successfully. Jan 20 00:53:08.822468 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:53:08.823815 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:53:08.825186 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:47466.service - OpenSSH per-connection server daemon (10.0.0.1:47466). Jan 20 00:53:08.826003 systemd-logind[1444]: Removed session 4. Jan 20 00:53:08.875527 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 47466 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:08.877196 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:08.881880 systemd-logind[1444]: New session 5 of user core. Jan 20 00:53:08.891264 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:53:08.953996 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:53:08.954457 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:53:08.974997 sudo[1595]: pam_unix(sudo:session): session closed for user root Jan 20 00:53:08.977149 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:08.986781 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:47466.service: Deactivated successfully. Jan 20 00:53:08.988544 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:53:08.990447 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:53:08.991864 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:47482.service - OpenSSH per-connection server daemon (10.0.0.1:47482). Jan 20 00:53:08.992623 systemd-logind[1444]: Removed session 5. 
Jan 20 00:53:09.054293 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 47482 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:09.059943 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:09.065892 systemd-logind[1444]: New session 6 of user core. Jan 20 00:53:09.089628 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:53:09.151482 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:53:09.151927 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:53:09.157834 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 20 00:53:09.169824 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:53:09.170447 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:53:09.207501 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:53:09.209697 auditctl[1607]: No rules Jan 20 00:53:09.210307 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:53:09.210564 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:53:09.213881 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:53:09.279544 augenrules[1625]: No rules Jan 20 00:53:09.281425 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:53:09.282610 sudo[1603]: pam_unix(sudo:session): session closed for user root Jan 20 00:53:09.284773 sshd[1600]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:09.368579 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:47482.service: Deactivated successfully. Jan 20 00:53:09.370709 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:53:09.372348 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:53:09.383512 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:47486.service - OpenSSH per-connection server daemon (10.0.0.1:47486). Jan 20 00:53:09.384700 systemd-logind[1444]: Removed session 6. Jan 20 00:53:09.411527 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 47486 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:53:09.413497 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:53:09.418652 systemd-logind[1444]: New session 7 of user core. Jan 20 00:53:09.433291 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:53:09.493423 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:53:09.493966 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:53:10.643370 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:53:10.644931 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:53:12.074728 dockerd[1655]: time="2026-01-20T00:53:12.074493003Z" level=info msg="Starting up" Jan 20 00:53:12.361044 dockerd[1655]: time="2026-01-20T00:53:12.360870503Z" level=info msg="Loading containers: start." 
Jan 20 00:53:12.823145 kernel: Initializing XFRM netlink socket Jan 20 00:53:12.924061 systemd-networkd[1384]: docker0: Link UP Jan 20 00:53:12.950140 dockerd[1655]: time="2026-01-20T00:53:12.949878828Z" level=info msg="Loading containers: done." Jan 20 00:53:12.994834 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck914769768-merged.mount: Deactivated successfully. Jan 20 00:53:12.996757 dockerd[1655]: time="2026-01-20T00:53:12.996637317Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:53:12.996931 dockerd[1655]: time="2026-01-20T00:53:12.996892153Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:53:12.997158 dockerd[1655]: time="2026-01-20T00:53:12.997121251Z" level=info msg="Daemon has completed initialization" Jan 20 00:53:13.046610 dockerd[1655]: time="2026-01-20T00:53:13.046433207Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:53:13.046669 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:53:14.498207 containerd[1461]: time="2026-01-20T00:53:14.498057589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 00:53:15.054301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39675558.mount: Deactivated successfully. Jan 20 00:53:16.870983 containerd[1461]: time="2026-01-20T00:53:16.870881499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:16.871596 containerd[1461]: time="2026-01-20T00:53:16.871525615Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 20 00:53:16.872969 containerd[1461]: time="2026-01-20T00:53:16.872915620Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:16.876434 containerd[1461]: time="2026-01-20T00:53:16.876389269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:16.877883 containerd[1461]: time="2026-01-20T00:53:16.877789231Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.379589527s" Jan 20 00:53:16.877931 containerd[1461]: time="2026-01-20T00:53:16.877902452Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 00:53:16.880889 containerd[1461]: time="2026-01-20T00:53:16.880857370Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 00:53:17.943174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:53:17.952322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 00:53:18.211881 containerd[1461]: time="2026-01-20T00:53:18.211674530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:18.213068 containerd[1461]: time="2026-01-20T00:53:18.212881224Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 20 00:53:18.213967 containerd[1461]: time="2026-01-20T00:53:18.213927647Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:18.220229 containerd[1461]: time="2026-01-20T00:53:18.220178170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:18.222402 containerd[1461]: time="2026-01-20T00:53:18.222313003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.341406712s" Jan 20 00:53:18.222449 containerd[1461]: time="2026-01-20T00:53:18.222407510Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 00:53:18.223070 containerd[1461]: time="2026-01-20T00:53:18.223030037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 00:53:18.432500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:18.437560 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:53:18.504809 kubelet[1875]: E0120 00:53:18.504671 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:53:18.510222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:53:18.510431 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 00:53:19.385669 containerd[1461]: time="2026-01-20T00:53:19.385587928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:19.386619 containerd[1461]: time="2026-01-20T00:53:19.386560404Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 20 00:53:19.387789 containerd[1461]: time="2026-01-20T00:53:19.387742887Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:19.390631 containerd[1461]: time="2026-01-20T00:53:19.390575317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:19.393021 containerd[1461]: time="2026-01-20T00:53:19.392973840Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.169571077s" Jan 20 00:53:19.393069 containerd[1461]: time="2026-01-20T00:53:19.393029865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 00:53:19.394126 containerd[1461]: time="2026-01-20T00:53:19.394042175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:53:20.256863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165099322.mount: Deactivated successfully. 
Jan 20 00:53:20.658431 containerd[1461]: time="2026-01-20T00:53:20.658267475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:20.659182 containerd[1461]: time="2026-01-20T00:53:20.659135866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 00:53:20.660414 containerd[1461]: time="2026-01-20T00:53:20.660364210Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:20.663189 containerd[1461]: time="2026-01-20T00:53:20.663149617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:20.664329 containerd[1461]: time="2026-01-20T00:53:20.664274201Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.270140274s" Jan 20 00:53:20.664371 containerd[1461]: time="2026-01-20T00:53:20.664329114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 00:53:20.665046 containerd[1461]: time="2026-01-20T00:53:20.665016678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 00:53:21.111495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180927407.mount: Deactivated successfully. 
Jan 20 00:53:22.060825 containerd[1461]: time="2026-01-20T00:53:22.060739386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.061610 containerd[1461]: time="2026-01-20T00:53:22.061563661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 20 00:53:22.062968 containerd[1461]: time="2026-01-20T00:53:22.062921751Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.066572 containerd[1461]: time="2026-01-20T00:53:22.066523046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.067857 containerd[1461]: time="2026-01-20T00:53:22.067813886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.402760841s" Jan 20 00:53:22.067857 containerd[1461]: time="2026-01-20T00:53:22.067852639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 00:53:22.068668 containerd[1461]: time="2026-01-20T00:53:22.068622997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:53:22.437567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022645531.mount: Deactivated successfully. 
Jan 20 00:53:22.444626 containerd[1461]: time="2026-01-20T00:53:22.444548527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.445461 containerd[1461]: time="2026-01-20T00:53:22.445383266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:53:22.446294 containerd[1461]: time="2026-01-20T00:53:22.446236062Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.448811 containerd[1461]: time="2026-01-20T00:53:22.448756439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:22.449459 containerd[1461]: time="2026-01-20T00:53:22.449403593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 380.729169ms" Jan 20 00:53:22.449459 containerd[1461]: time="2026-01-20T00:53:22.449450860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:53:22.450205 containerd[1461]: time="2026-01-20T00:53:22.450176135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 00:53:22.873030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919096876.mount: Deactivated successfully. Jan 20 00:53:24.896190 containerd[1461]: time="2026-01-20T00:53:24.896058502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:24.897148 containerd[1461]: time="2026-01-20T00:53:24.897061944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 00:53:24.898436 containerd[1461]: time="2026-01-20T00:53:24.898379136Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:24.901747 containerd[1461]: time="2026-01-20T00:53:24.901675497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:24.902839 containerd[1461]: time="2026-01-20T00:53:24.902795388Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.452582225s" Jan 20 00:53:24.902839 containerd[1461]: time="2026-01-20T00:53:24.902832628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 00:53:28.692995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 20 00:53:28.701295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:28.854812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:28.860295 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:53:28.896810 kubelet[2038]: E0120 00:53:28.896694 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:53:28.900967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:53:28.901330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:53:28.975475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:28.992340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:29.017956 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit session-7.scope)... Jan 20 00:53:29.017981 systemd[1]: Reloading... Jan 20 00:53:29.095454 zram_generator::config[2093]: No configuration found. Jan 20 00:53:29.246235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:53:29.329296 systemd[1]: Reloading finished in 310 ms. Jan 20 00:53:29.389424 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:53:29.389517 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:53:29.389839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:29.392426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:29.556988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:29.561740 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:53:29.602866 kubelet[2142]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:53:29.602866 kubelet[2142]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:53:29.602866 kubelet[2142]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:53:29.603230 kubelet[2142]: I0120 00:53:29.602896 2142 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:53:29.976815 kubelet[2142]: I0120 00:53:29.976760 2142 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:53:29.976915 kubelet[2142]: I0120 00:53:29.976821 2142 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:53:29.977275 kubelet[2142]: I0120 00:53:29.977240 2142 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:53:29.997775 kubelet[2142]: E0120 00:53:29.997665 2142 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:53:29.999839 kubelet[2142]: I0120 00:53:29.999782 2142 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:53:30.008001 kubelet[2142]: E0120 00:53:30.007927 2142 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:53:30.008001 kubelet[2142]: I0120 00:53:30.007965 2142 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:53:30.014157 kubelet[2142]: I0120 00:53:30.014127 2142 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:53:30.014460 kubelet[2142]: I0120 00:53:30.014404 2142 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:53:30.014667 kubelet[2142]: I0120 00:53:30.014438 2142 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:53:30.014902 kubelet[2142]: I0120 00:53:30.014697 2142 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:53:30.014902 kubelet[2142]: I0120 00:53:30.014739 2142 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:53:30.015572 kubelet[2142]: I0120 00:53:30.015518 2142 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:53:30.017508 kubelet[2142]: I0120 00:53:30.017440 2142 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:53:30.017508 kubelet[2142]: I0120 00:53:30.017491 2142 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:53:30.017603 kubelet[2142]: I0120 00:53:30.017583 2142 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:53:30.017697 kubelet[2142]: I0120 00:53:30.017658 2142 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:53:30.024424 kubelet[2142]: E0120 00:53:30.024263 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:53:30.024424 kubelet[2142]: E0120 00:53:30.024277 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 
00:53:30.026609 kubelet[2142]: I0120 00:53:30.026578 2142 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:53:30.027664 kubelet[2142]: I0120 00:53:30.027620 2142 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:53:30.029503 kubelet[2142]: W0120 00:53:30.029469 2142 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:53:30.033421 kubelet[2142]: I0120 00:53:30.033375 2142 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:53:30.033472 kubelet[2142]: I0120 00:53:30.033461 2142 server.go:1289] "Started kubelet" Jan 20 00:53:30.034292 kubelet[2142]: I0120 00:53:30.034203 2142 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:53:30.036137 kubelet[2142]: I0120 00:53:30.034379 2142 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:53:30.036137 kubelet[2142]: I0120 00:53:30.034856 2142 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:53:30.036137 kubelet[2142]: I0120 00:53:30.035069 2142 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:53:30.036416 kubelet[2142]: I0120 00:53:30.036396 2142 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:53:30.038335 kubelet[2142]: I0120 00:53:30.038319 2142 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:53:30.039818 kubelet[2142]: E0120 00:53:30.037910 2142 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4a441bbde26c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:53:30.033422956 +0000 UTC m=+0.467256191,LastTimestamp:2026-01-20 00:53:30.033422956 +0000 UTC m=+0.467256191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:53:30.040336 kubelet[2142]: E0120 00:53:30.040296 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:30.040336 kubelet[2142]: I0120 00:53:30.040321 2142 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:53:30.040336 kubelet[2142]: I0120 00:53:30.040300 2142 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:53:30.040656 kubelet[2142]: I0120 00:53:30.040610 2142 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:53:30.041035 kubelet[2142]: E0120 00:53:30.040972 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:53:30.041035 kubelet[2142]: E0120 00:53:30.040964 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="200ms" Jan 20 00:53:30.041516 kubelet[2142]: I0120 00:53:30.041434 2142 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:53:30.041629 kubelet[2142]: I0120 00:53:30.041520 2142 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:53:30.042052 kubelet[2142]: E0120 00:53:30.042020 2142 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:53:30.042679 kubelet[2142]: I0120 00:53:30.042651 2142 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:53:30.060686 kubelet[2142]: I0120 00:53:30.060656 2142 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:53:30.060686 kubelet[2142]: I0120 00:53:30.060678 2142 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:53:30.060770 kubelet[2142]: I0120 00:53:30.060694 2142 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:53:30.062939 kubelet[2142]: I0120 00:53:30.062911 2142 policy_none.go:49] "None policy: Start" Jan 20 00:53:30.062985 kubelet[2142]: I0120 00:53:30.062952 2142 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:53:30.062985 kubelet[2142]: I0120 00:53:30.062966 2142 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:53:30.063276 kubelet[2142]: I0120 00:53:30.063216 2142 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:53:30.065205 kubelet[2142]: I0120 00:53:30.065004 2142 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:53:30.065205 kubelet[2142]: I0120 00:53:30.065132 2142 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:53:30.065259 kubelet[2142]: I0120 00:53:30.065167 2142 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:53:30.065310 kubelet[2142]: I0120 00:53:30.065284 2142 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:53:30.066911 kubelet[2142]: E0120 00:53:30.065640 2142 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:53:30.067332 kubelet[2142]: E0120 00:53:30.067286 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:53:30.070664 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:53:30.085739 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 20 00:53:30.089435 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:53:30.098214 kubelet[2142]: E0120 00:53:30.098112 2142 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:53:30.098564 kubelet[2142]: I0120 00:53:30.098522 2142 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:53:30.098600 kubelet[2142]: I0120 00:53:30.098569 2142 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:53:30.099498 kubelet[2142]: I0120 00:53:30.099144 2142 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:53:30.100661 kubelet[2142]: E0120 00:53:30.100643 2142 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:53:30.100965 kubelet[2142]: E0120 00:53:30.100923 2142 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:53:30.178007 systemd[1]: Created slice kubepods-burstable-pod6bb1bef817db63fe8deb2a6433d1cba5.slice - libcontainer container kubepods-burstable-pod6bb1bef817db63fe8deb2a6433d1cba5.slice. Jan 20 00:53:30.195163 kubelet[2142]: E0120 00:53:30.195122 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:30.199184 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 00:53:30.201696 kubelet[2142]: I0120 00:53:30.201590 2142 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:53:30.202185 kubelet[2142]: E0120 00:53:30.202144 2142 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Jan 20 00:53:30.202814 kubelet[2142]: E0120 00:53:30.202759 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:30.205818 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
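The kubepods slice names systemd creates above follow a visible pattern: a QoS-class parent (kubepods, kubepods-burstable, kubepods-besteffort) plus "pod" and the pod UID with any dashes escaped to underscores. A small sketch that reproduces the naming pattern seen in these entries, not the kubelet's cgroup manager itself:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds kubepods[-<qos>]-pod<uid>.slice, escaping dashes in the
// pod UID to underscores (compare kubepods-besteffort-podd9286af5_e982_...slice
// later in this log). Guaranteed pods sit directly under kubepods.slice.
func podSliceName(qosClass, podUID string) string {
	parent := "kubepods"
	if qosClass == "burstable" || qosClass == "besteffort" {
		parent += "-" + qosClass
	}
	return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "6bb1bef817db63fe8deb2a6433d1cba5"))
	fmt.Println(podSliceName("besteffort", "d9286af5-e982-45a1-9a0f-177bd1912325"))
}
```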
Jan 20 00:53:30.207719 kubelet[2142]: E0120 00:53:30.207626 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:30.241972 kubelet[2142]: E0120 00:53:30.241810 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="400ms" Jan 20 00:53:30.341442 kubelet[2142]: I0120 00:53:30.341395 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:30.341442 kubelet[2142]: I0120 00:53:30.341445 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:30.341442 kubelet[2142]: I0120 00:53:30.341468 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:30.341442 kubelet[2142]: I0120 00:53:30.341482 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:30.341669 kubelet[2142]: I0120 00:53:30.341500 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:30.341669 kubelet[2142]: I0120 00:53:30.341512 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:30.341669 kubelet[2142]: I0120 00:53:30.341524 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:30.341669 kubelet[2142]: I0120 00:53:30.341539 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:30.341669 kubelet[2142]: I0120 00:53:30.341553 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:30.403860 kubelet[2142]: I0120 00:53:30.403770 2142 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:53:30.404361 kubelet[2142]: E0120 00:53:30.404312 2142 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Jan 20 00:53:30.496842 kubelet[2142]: E0120 00:53:30.496685 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:30.497889 containerd[1461]: time="2026-01-20T00:53:30.497839142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bb1bef817db63fe8deb2a6433d1cba5,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:30.504406 kubelet[2142]: E0120 00:53:30.504366 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:30.505177 containerd[1461]: time="2026-01-20T00:53:30.505026574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:30.508735 kubelet[2142]: E0120 00:53:30.508623 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:30.509149 containerd[1461]: time="2026-01-20T00:53:30.509040529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:30.643350 kubelet[2142]: E0120 00:53:30.643292 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="800ms" Jan 20 00:53:30.806437 kubelet[2142]: I0120 00:53:30.806276 2142 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:53:30.806631 kubelet[2142]: E0120 00:53:30.806582 2142 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Jan 20 00:53:30.900973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083342839.mount: Deactivated successfully. 
Jan 20 00:53:30.906771 containerd[1461]: time="2026-01-20T00:53:30.906674459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:53:30.907698 containerd[1461]: time="2026-01-20T00:53:30.907631660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:53:30.910841 containerd[1461]: time="2026-01-20T00:53:30.910804436Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:53:30.913240 containerd[1461]: time="2026-01-20T00:53:30.913160534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:53:30.913565 containerd[1461]: time="2026-01-20T00:53:30.913453894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:53:30.914377 containerd[1461]: time="2026-01-20T00:53:30.914346549Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:53:30.914545 containerd[1461]: time="2026-01-20T00:53:30.914442538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:53:30.916150 containerd[1461]: time="2026-01-20T00:53:30.916066142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:53:30.918976 containerd[1461]: time="2026-01-20T00:53:30.918934628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.859885ms" Jan 20 00:53:30.922104 containerd[1461]: time="2026-01-20T00:53:30.920321944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.379419ms" Jan 20 00:53:30.930681 containerd[1461]: time="2026-01-20T00:53:30.925659882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 416.50515ms" Jan 20 00:53:31.016409 kubelet[2142]: E0120 00:53:31.016342 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:53:31.044153 containerd[1461]: time="2026-01-20T00:53:31.042853914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:31.044153 containerd[1461]: time="2026-01-20T00:53:31.044109449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:31.044153 containerd[1461]: time="2026-01-20T00:53:31.044124477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.044363 containerd[1461]: time="2026-01-20T00:53:31.044202092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.046552 containerd[1461]: time="2026-01-20T00:53:31.046378368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:31.046552 containerd[1461]: time="2026-01-20T00:53:31.046418132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:31.046552 containerd[1461]: time="2026-01-20T00:53:31.046427901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.046552 containerd[1461]: time="2026-01-20T00:53:31.046497791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.048472 containerd[1461]: time="2026-01-20T00:53:31.048324997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:31.048472 containerd[1461]: time="2026-01-20T00:53:31.048392544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:31.048472 containerd[1461]: time="2026-01-20T00:53:31.048407502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.048766 containerd[1461]: time="2026-01-20T00:53:31.048546862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:31.074277 systemd[1]: Started cri-containerd-b2d5569e765ad6138510cd510017043364acc7c5f47c104d057770adf9ea1091.scope - libcontainer container b2d5569e765ad6138510cd510017043364acc7c5f47c104d057770adf9ea1091. Jan 20 00:53:31.078882 systemd[1]: Started cri-containerd-26dbc342ff328515e5042573c986974567c4537e7f71fb241f5c2ed0075d44bc.scope - libcontainer container 26dbc342ff328515e5042573c986974567c4537e7f71fb241f5c2ed0075d44bc. Jan 20 00:53:31.081599 systemd[1]: Started cri-containerd-4313e95c195da17c129aec047f90a48d0336469cd3b9d8991001e97c53105b6e.scope - libcontainer container 4313e95c195da17c129aec047f90a48d0336469cd3b9d8991001e97c53105b6e. 
Jan 20 00:53:31.127138 containerd[1461]: time="2026-01-20T00:53:31.124941711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4313e95c195da17c129aec047f90a48d0336469cd3b9d8991001e97c53105b6e\"" Jan 20 00:53:31.127221 kubelet[2142]: E0120 00:53:31.126879 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:31.132617 containerd[1461]: time="2026-01-20T00:53:31.132550810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"26dbc342ff328515e5042573c986974567c4537e7f71fb241f5c2ed0075d44bc\"" Jan 20 00:53:31.133346 kubelet[2142]: E0120 00:53:31.133320 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:31.134486 containerd[1461]: time="2026-01-20T00:53:31.134447912Z" level=info msg="CreateContainer within sandbox \"4313e95c195da17c129aec047f90a48d0336469cd3b9d8991001e97c53105b6e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:53:31.137862 containerd[1461]: time="2026-01-20T00:53:31.137454759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bb1bef817db63fe8deb2a6433d1cba5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2d5569e765ad6138510cd510017043364acc7c5f47c104d057770adf9ea1091\"" Jan 20 00:53:31.137862 containerd[1461]: time="2026-01-20T00:53:31.137747014Z" level=info msg="CreateContainer within sandbox \"26dbc342ff328515e5042573c986974567c4537e7f71fb241f5c2ed0075d44bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:53:31.138267 kubelet[2142]: E0120 00:53:31.138228 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:31.143554 containerd[1461]: time="2026-01-20T00:53:31.143471964Z" level=info msg="CreateContainer within sandbox \"b2d5569e765ad6138510cd510017043364acc7c5f47c104d057770adf9ea1091\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:53:31.157948 containerd[1461]: time="2026-01-20T00:53:31.157895096Z" level=info msg="CreateContainer within sandbox \"4313e95c195da17c129aec047f90a48d0336469cd3b9d8991001e97c53105b6e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93078dfb646f95be1985268b1b1a2c2729190a5fcd40f7ea84d9dd37d9501608\"" Jan 20 00:53:31.158739 containerd[1461]: time="2026-01-20T00:53:31.158485459Z" level=info msg="StartContainer for \"93078dfb646f95be1985268b1b1a2c2729190a5fcd40f7ea84d9dd37d9501608\"" Jan 20 00:53:31.162677 containerd[1461]: time="2026-01-20T00:53:31.162641190Z" level=info msg="CreateContainer within sandbox \"26dbc342ff328515e5042573c986974567c4537e7f71fb241f5c2ed0075d44bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33b419811982b0eb347ba6b19b7a997a2a9ad48714b483bcb062b2981b422d34\"" Jan 20 00:53:31.164296 containerd[1461]: time="2026-01-20T00:53:31.164273471Z" level=info msg="StartContainer for \"33b419811982b0eb347ba6b19b7a997a2a9ad48714b483bcb062b2981b422d34\"" Jan 20 
00:53:31.168027 containerd[1461]: time="2026-01-20T00:53:31.167567235Z" level=info msg="CreateContainer within sandbox \"b2d5569e765ad6138510cd510017043364acc7c5f47c104d057770adf9ea1091\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c0da4dddb41f051b48f32f90dabe0e82b002953136757cac1848e2a65fd79356\"" Jan 20 00:53:31.168198 containerd[1461]: time="2026-01-20T00:53:31.168167679Z" level=info msg="StartContainer for \"c0da4dddb41f051b48f32f90dabe0e82b002953136757cac1848e2a65fd79356\"" Jan 20 00:53:31.189328 systemd[1]: Started cri-containerd-93078dfb646f95be1985268b1b1a2c2729190a5fcd40f7ea84d9dd37d9501608.scope - libcontainer container 93078dfb646f95be1985268b1b1a2c2729190a5fcd40f7ea84d9dd37d9501608. Jan 20 00:53:31.192961 systemd[1]: Started cri-containerd-33b419811982b0eb347ba6b19b7a997a2a9ad48714b483bcb062b2981b422d34.scope - libcontainer container 33b419811982b0eb347ba6b19b7a997a2a9ad48714b483bcb062b2981b422d34. Jan 20 00:53:31.203325 systemd[1]: Started cri-containerd-c0da4dddb41f051b48f32f90dabe0e82b002953136757cac1848e2a65fd79356.scope - libcontainer container c0da4dddb41f051b48f32f90dabe0e82b002953136757cac1848e2a65fd79356. Jan 20 00:53:31.209909 kubelet[2142]: E0120 00:53:31.209855 2142 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:53:31.245027 containerd[1461]: time="2026-01-20T00:53:31.244972440Z" level=info msg="StartContainer for \"93078dfb646f95be1985268b1b1a2c2729190a5fcd40f7ea84d9dd37d9501608\" returns successfully" Jan 20 00:53:31.245316 containerd[1461]: time="2026-01-20T00:53:31.245005138Z" level=info msg="StartContainer for \"33b419811982b0eb347ba6b19b7a997a2a9ad48714b483bcb062b2981b422d34\" returns successfully" Jan 20 00:53:31.255900 containerd[1461]: time="2026-01-20T00:53:31.255773289Z" level=info msg="StartContainer for \"c0da4dddb41f051b48f32f90dabe0e82b002953136757cac1848e2a65fd79356\" returns successfully" Jan 20 00:53:31.608287 kubelet[2142]: I0120 00:53:31.608225 2142 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:53:32.079742 kubelet[2142]: E0120 00:53:32.079240 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:32.079742 kubelet[2142]: E0120 00:53:32.079395 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:32.082638 kubelet[2142]: E0120 00:53:32.082387 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:32.082638 kubelet[2142]: E0120 00:53:32.082502 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:32.086465 kubelet[2142]: E0120 00:53:32.086375 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:32.086626 kubelet[2142]: E0120 00:53:32.086556 2142 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:32.549660 kubelet[2142]: E0120 00:53:32.549584 2142 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:53:32.831443 kubelet[2142]: I0120 00:53:32.830733 2142 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:53:32.831443 kubelet[2142]: E0120 00:53:32.830806 2142 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 00:53:32.846698 kubelet[2142]: E0120 00:53:32.846654 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:32.947569 kubelet[2142]: E0120 00:53:32.947474 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.048142 kubelet[2142]: E0120 00:53:33.048013 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.088455 kubelet[2142]: E0120 00:53:33.088202 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:33.088455 kubelet[2142]: E0120 00:53:33.088324 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:33.088842 kubelet[2142]: E0120 00:53:33.088544 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:33.088842 kubelet[2142]: E0120 00:53:33.088690 2142 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:53:33.088842 kubelet[2142]: E0120 00:53:33.088783 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:33.088898 kubelet[2142]: E0120 00:53:33.088863 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:33.149229 kubelet[2142]: E0120 00:53:33.149178 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.249964 kubelet[2142]: E0120 00:53:33.249886 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.351069 kubelet[2142]: E0120 00:53:33.350937 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.451888 kubelet[2142]: E0120 00:53:33.451827 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.552626 kubelet[2142]: E0120 00:53:33.552542 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.653760 kubelet[2142]: E0120 00:53:33.653611 2142 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 20 00:53:33.754059 kubelet[2142]: E0120 00:53:33.754008 2142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:53:33.840883 kubelet[2142]: I0120 00:53:33.840820 2142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:33.848685 kubelet[2142]: I0120 00:53:33.848652 2142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:33.853436 kubelet[2142]: I0120 00:53:33.853409 2142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:34.026012 kubelet[2142]: I0120 00:53:34.025943 2142 apiserver.go:52] "Watching apiserver" Jan 20 00:53:34.040600 kubelet[2142]: I0120 00:53:34.040524 2142 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:53:34.088646 kubelet[2142]: I0120 00:53:34.088588 2142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:34.088646 kubelet[2142]: E0120 00:53:34.088637 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:34.089046 kubelet[2142]: E0120 00:53:34.088920 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:34.094928 kubelet[2142]: E0120 00:53:34.094844 2142 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:34.094987 kubelet[2142]: E0120 00:53:34.094961 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:34.507145 systemd[1]: Reloading requested from client PID 2436 ('systemctl') (unit session-7.scope)... Jan 20 00:53:34.507176 systemd[1]: Reloading... Jan 20 00:53:34.594168 zram_generator::config[2476]: No configuration found. Jan 20 00:53:34.696009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:53:34.775488 systemd[1]: Reloading finished in 267 ms. Jan 20 00:53:34.819585 kubelet[2142]: I0120 00:53:34.819516 2142 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:53:34.819539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:34.834863 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:53:34.835184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:53:34.848353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:53:34.998142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:53:35.002882 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:53:35.049654 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:53:35.049654 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:53:35.049654 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:53:35.049654 kubelet[2520]: I0120 00:53:35.049607 2520 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:53:35.057110 kubelet[2520]: I0120 00:53:35.057027 2520 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:53:35.057110 kubelet[2520]: I0120 00:53:35.057056 2520 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:53:35.057273 kubelet[2520]: I0120 00:53:35.057248 2520 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:53:35.058416 kubelet[2520]: I0120 00:53:35.058401 2520 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 00:53:35.060219 kubelet[2520]: I0120 00:53:35.060202 2520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:53:35.063953 kubelet[2520]: E0120 00:53:35.063261 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:53:35.063953 kubelet[2520]: I0120 00:53:35.063284 2520 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:53:35.068555 kubelet[2520]: I0120 00:53:35.068501 2520 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:53:35.068873 kubelet[2520]: I0120 00:53:35.068817 2520 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:53:35.069159 kubelet[2520]: I0120 00:53:35.068850 2520 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:53:35.069159 kubelet[2520]: I0120 00:53:35.069158 2520 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:53:35.069276 kubelet[2520]: I0120 00:53:35.069168 2520 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:53:35.069276 kubelet[2520]: I0120 00:53:35.069208 2520 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:53:35.069402 kubelet[2520]: I0120 00:53:35.069364 2520 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:53:35.069402 kubelet[2520]: I0120 00:53:35.069389 2520 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:53:35.069455 kubelet[2520]: I0120 00:53:35.069408 2520 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:53:35.069455 kubelet[2520]: I0120 00:53:35.069422 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:53:35.070554 kubelet[2520]: I0120 00:53:35.070512 2520 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:53:35.070995 kubelet[2520]: I0120 00:53:35.070948 2520 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:53:35.078048 kubelet[2520]: I0120 00:53:35.078006 2520 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:53:35.078129 kubelet[2520]: I0120 00:53:35.078065 2520 server.go:1289] "Started kubelet" Jan 20 00:53:35.078501 kubelet[2520]: I0120 00:53:35.078362 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 
00:53:35.078976 kubelet[2520]: I0120 00:53:35.078890 2520 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:53:35.078976 kubelet[2520]: I0120 00:53:35.078952 2520 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:53:35.079497 kubelet[2520]: I0120 00:53:35.079132 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:53:35.079663 kubelet[2520]: I0120 00:53:35.079634 2520 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:53:35.080216 kubelet[2520]: I0120 00:53:35.080183 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:53:35.082662 kubelet[2520]: I0120 00:53:35.081267 2520 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:53:35.082662 kubelet[2520]: I0120 00:53:35.081342 2520 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:53:35.082662 kubelet[2520]: I0120 00:53:35.081438 2520 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:53:35.083678 kubelet[2520]: I0120 00:53:35.083653 2520 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:53:35.084032 kubelet[2520]: I0120 00:53:35.083967 2520 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:53:35.085580 kubelet[2520]: I0120 00:53:35.085536 2520 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:53:35.085970 kubelet[2520]: E0120 00:53:35.085956 2520 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:53:35.094877 kubelet[2520]: I0120 00:53:35.094760 2520 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:53:35.096962 kubelet[2520]: I0120 00:53:35.096691 2520 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:53:35.096962 kubelet[2520]: I0120 00:53:35.096749 2520 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:53:35.096962 kubelet[2520]: I0120 00:53:35.096767 2520 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
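The "Systemd watchdog is not enabled" lines mean kubelet.service has no WatchdogSec= set, so the kubelet skips watchdog health reporting. As background, a stdlib-only sketch of the sd_notify watchdog protocol itself: read WATCHDOG_USEC, then send "WATCHDOG=1" datagrams to NOTIFY_SOCKET at half the interval. This is an illustration of the protocol, not the kubelet's watchdog_linux implementation.

```go
package main

import (
	"log"
	"net"
	"os"
	"strconv"
	"time"
)

func main() {
	sock := os.Getenv("NOTIFY_SOCKET")
	usec := os.Getenv("WATCHDOG_USEC")
	if sock == "" || usec == "" {
		log.Println("systemd watchdog is not enabled")
		return
	}
	if sock[0] == '@' { // abstract socket namespace
		sock = "\x00" + sock[1:]
	}
	n, err := strconv.ParseInt(usec, 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	interval := time.Duration(n) * time.Microsecond / 2 // ping at half the timeout

	conn, err := net.DialUnix("unixgram", nil, &net.UnixAddr{Name: sock, Net: "unixgram"})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for range time.Tick(interval) {
		if _, err := conn.Write([]byte("WATCHDOG=1")); err != nil {
			log.Fatal(err)
		}
	}
}
```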
Jan 20 00:53:35.096962 kubelet[2520]: I0120 00:53:35.096774 2520 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:53:35.096962 kubelet[2520]: E0120 00:53:35.096812 2520 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:53:35.121738 kubelet[2520]: I0120 00:53:35.121643 2520 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:53:35.121738 kubelet[2520]: I0120 00:53:35.121676 2520 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:53:35.121738 kubelet[2520]: I0120 00:53:35.121694 2520 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:53:35.121872 kubelet[2520]: I0120 00:53:35.121829 2520 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:53:35.121872 kubelet[2520]: I0120 00:53:35.121839 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:53:35.121872 kubelet[2520]: I0120 00:53:35.121853 2520 policy_none.go:49] "None policy: Start" Jan 20 00:53:35.121872 kubelet[2520]: I0120 00:53:35.121862 2520 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:53:35.121872 kubelet[2520]: I0120 00:53:35.121872 2520 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:53:35.121987 kubelet[2520]: I0120 00:53:35.121951 2520 state_mem.go:75] "Updated machine memory state" Jan 20 00:53:35.126175 kubelet[2520]: E0120 00:53:35.126152 2520 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:53:35.126346 kubelet[2520]: I0120 00:53:35.126332 2520 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:53:35.126372 kubelet[2520]: I0120 00:53:35.126347 2520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:53:35.126548 kubelet[2520]: I0120 00:53:35.126533 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:53:35.128587 kubelet[2520]: E0120 00:53:35.128030 2520 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:53:35.198386 kubelet[2520]: I0120 00:53:35.198308 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:35.198386 kubelet[2520]: I0120 00:53:35.198336 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.198541 kubelet[2520]: I0120 00:53:35.198429 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:35.206013 kubelet[2520]: E0120 00:53:35.205844 2520 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:35.206282 kubelet[2520]: E0120 00:53:35.206256 2520 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.207162 kubelet[2520]: E0120 00:53:35.206512 2520 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:35.234050 kubelet[2520]: I0120 00:53:35.234017 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:53:35.241413 kubelet[2520]: I0120 00:53:35.241396 2520 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:53:35.241509 kubelet[2520]: I0120 00:53:35.241453 2520 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:53:35.283542 kubelet[2520]: I0120 00:53:35.283502 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:35.283542 kubelet[2520]: I0120 00:53:35.283541 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:35.283542 kubelet[2520]: I0120 00:53:35.283560 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.283542 kubelet[2520]: I0120 00:53:35.283573 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.283782 kubelet[2520]: I0120 00:53:35.283586 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:35.283782 kubelet[2520]: I0120 00:53:35.283598 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb1bef817db63fe8deb2a6433d1cba5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bb1bef817db63fe8deb2a6433d1cba5\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:35.283782 kubelet[2520]: I0120 00:53:35.283612 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.283782 kubelet[2520]: I0120 00:53:35.283626 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.283868 kubelet[2520]: I0120 00:53:35.283794 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:53:35.507183 kubelet[2520]: E0120 00:53:35.507130 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:35.508013 kubelet[2520]: E0120 00:53:35.507942 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:35.508013 kubelet[2520]: E0120 00:53:35.507949 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:36.070280 kubelet[2520]: I0120 00:53:36.070204 2520 apiserver.go:52] "Watching apiserver" Jan 20 00:53:36.082809 kubelet[2520]: I0120 00:53:36.081863 2520 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:53:36.113937 kubelet[2520]: I0120 00:53:36.113323 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:36.113937 kubelet[2520]: E0120 00:53:36.113322 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:36.114121 kubelet[2520]: I0120 00:53:36.113952 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:36.121433 kubelet[2520]: E0120 00:53:36.121371 2520 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:53:36.121860 kubelet[2520]: E0120 00:53:36.121841 2520 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:53:36.122140 kubelet[2520]: E0120 00:53:36.121970 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:36.122140 kubelet[2520]: E0120 00:53:36.122034 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:36.142805 kubelet[2520]: I0120 00:53:36.142657 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.142587482 podStartE2EDuration="3.142587482s" podCreationTimestamp="2026-01-20 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:53:36.134818118 +0000 UTC m=+1.127581645" watchObservedRunningTime="2026-01-20 00:53:36.142587482 +0000 UTC m=+1.135351019" Jan 20 00:53:36.150650 kubelet[2520]: I0120 00:53:36.150491 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.150476688 podStartE2EDuration="3.150476688s" podCreationTimestamp="2026-01-20 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:53:36.142972727 +0000 UTC m=+1.135736304" watchObservedRunningTime="2026-01-20 00:53:36.150476688 +0000 UTC m=+1.143240215" Jan 20 00:53:36.159828 kubelet[2520]: I0120 00:53:36.159704 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.159691549 podStartE2EDuration="3.159691549s" podCreationTimestamp="2026-01-20 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:53:36.150962303 +0000 UTC m=+1.143725829" watchObservedRunningTime="2026-01-20 00:53:36.159691549 +0000 UTC m=+1.152455076" Jan 20 00:53:37.115469 kubelet[2520]: E0120 00:53:37.115370 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:37.115469 kubelet[2520]: E0120 00:53:37.115459 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:38.933880 kubelet[2520]: E0120 00:53:38.933807 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:39.352998 kubelet[2520]: I0120 00:53:39.352886 2520 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:53:39.353498 containerd[1461]: time="2026-01-20T00:53:39.353430335Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 00:53:39.353951 kubelet[2520]: I0120 00:53:39.353835 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:53:39.743631 kubelet[2520]: E0120 00:53:39.743573 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:40.431576 systemd[1]: Created slice kubepods-besteffort-podd9286af5_e982_45a1_9a0f_177bd1912325.slice - libcontainer container kubepods-besteffort-podd9286af5_e982_45a1_9a0f_177bd1912325.slice. Jan 20 00:53:40.517801 kubelet[2520]: I0120 00:53:40.517675 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9286af5-e982-45a1-9a0f-177bd1912325-kube-proxy\") pod \"kube-proxy-klxgn\" (UID: \"d9286af5-e982-45a1-9a0f-177bd1912325\") " pod="kube-system/kube-proxy-klxgn" Jan 20 00:53:40.517801 kubelet[2520]: I0120 00:53:40.517740 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9286af5-e982-45a1-9a0f-177bd1912325-lib-modules\") pod \"kube-proxy-klxgn\" (UID: \"d9286af5-e982-45a1-9a0f-177bd1912325\") " pod="kube-system/kube-proxy-klxgn" Jan 20 00:53:40.517801 kubelet[2520]: I0120 00:53:40.517756 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xfxn\" (UniqueName: \"kubernetes.io/projected/d9286af5-e982-45a1-9a0f-177bd1912325-kube-api-access-4xfxn\") pod \"kube-proxy-klxgn\" (UID: \"d9286af5-e982-45a1-9a0f-177bd1912325\") " pod="kube-system/kube-proxy-klxgn" Jan 20 00:53:40.517801 kubelet[2520]: I0120 00:53:40.517773 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9286af5-e982-45a1-9a0f-177bd1912325-xtables-lock\") pod \"kube-proxy-klxgn\" (UID: \"d9286af5-e982-45a1-9a0f-177bd1912325\") " pod="kube-system/kube-proxy-klxgn" Jan 20 00:53:40.591317 systemd[1]: Created slice kubepods-besteffort-podcdd6a61c_cf6e_453d_bed0_7b43b860a45f.slice - libcontainer container kubepods-besteffort-podcdd6a61c_cf6e_453d_bed0_7b43b860a45f.slice. 
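The "Updating Pod CIDR" entry fires once the controller-manager has allocated 192.168.0.0/24 to this node; the kubelet then forwards it to the runtime over CRI ("Updating runtime config through cri with podcidr"). A client-go sketch to read that allocation back from the Node object; the kubeconfig path is assumed, and the node name "localhost" is taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("podCIDR=%s podCIDRs=%v\n", node.Spec.PodCIDR, node.Spec.PodCIDRs)
}
```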
Jan 20 00:53:40.618835 kubelet[2520]: I0120 00:53:40.618773 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmn5\" (UniqueName: \"kubernetes.io/projected/cdd6a61c-cf6e-453d-bed0-7b43b860a45f-kube-api-access-vkmn5\") pod \"tigera-operator-7dcd859c48-2bzt2\" (UID: \"cdd6a61c-cf6e-453d-bed0-7b43b860a45f\") " pod="tigera-operator/tigera-operator-7dcd859c48-2bzt2" Jan 20 00:53:40.618835 kubelet[2520]: I0120 00:53:40.618839 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cdd6a61c-cf6e-453d-bed0-7b43b860a45f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2bzt2\" (UID: \"cdd6a61c-cf6e-453d-bed0-7b43b860a45f\") " pod="tigera-operator/tigera-operator-7dcd859c48-2bzt2" Jan 20 00:53:40.744783 kubelet[2520]: E0120 00:53:40.744706 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:40.745368 containerd[1461]: time="2026-01-20T00:53:40.745335797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klxgn,Uid:d9286af5-e982-45a1-9a0f-177bd1912325,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:40.772169 containerd[1461]: time="2026-01-20T00:53:40.772020624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:40.772990 containerd[1461]: time="2026-01-20T00:53:40.772220306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:40.772990 containerd[1461]: time="2026-01-20T00:53:40.772965711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:40.773186 containerd[1461]: time="2026-01-20T00:53:40.773064144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:40.798268 systemd[1]: Started cri-containerd-60d5f078df454218c4c4fea19a31a4e62f8c2ffa03d88a6829a28031f5cd304f.scope - libcontainer container 60d5f078df454218c4c4fea19a31a4e62f8c2ffa03d88a6829a28031f5cd304f. 
Jan 20 00:53:40.828058 containerd[1461]: time="2026-01-20T00:53:40.828009613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klxgn,Uid:d9286af5-e982-45a1-9a0f-177bd1912325,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d5f078df454218c4c4fea19a31a4e62f8c2ffa03d88a6829a28031f5cd304f\"" Jan 20 00:53:40.828821 kubelet[2520]: E0120 00:53:40.828771 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:40.834349 containerd[1461]: time="2026-01-20T00:53:40.834299381Z" level=info msg="CreateContainer within sandbox \"60d5f078df454218c4c4fea19a31a4e62f8c2ffa03d88a6829a28031f5cd304f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:53:40.851355 containerd[1461]: time="2026-01-20T00:53:40.851292696Z" level=info msg="CreateContainer within sandbox \"60d5f078df454218c4c4fea19a31a4e62f8c2ffa03d88a6829a28031f5cd304f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2d4175b43b2087c5aa4257d5a60682de6cce391dea84326493b61f0f4e2658d\"" Jan 20 00:53:40.852022 containerd[1461]: time="2026-01-20T00:53:40.851992070Z" level=info msg="StartContainer for \"b2d4175b43b2087c5aa4257d5a60682de6cce391dea84326493b61f0f4e2658d\"" Jan 20 00:53:40.888262 systemd[1]: Started cri-containerd-b2d4175b43b2087c5aa4257d5a60682de6cce391dea84326493b61f0f4e2658d.scope - libcontainer container b2d4175b43b2087c5aa4257d5a60682de6cce391dea84326493b61f0f4e2658d. Jan 20 00:53:40.898388 containerd[1461]: time="2026-01-20T00:53:40.898308701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2bzt2,Uid:cdd6a61c-cf6e-453d-bed0-7b43b860a45f,Namespace:tigera-operator,Attempt:0,}" Jan 20 00:53:40.928865 containerd[1461]: time="2026-01-20T00:53:40.928800344Z" level=info msg="StartContainer for \"b2d4175b43b2087c5aa4257d5a60682de6cce391dea84326493b61f0f4e2658d\" returns successfully" Jan 20 00:53:40.935953 containerd[1461]: time="2026-01-20T00:53:40.935830766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:40.935953 containerd[1461]: time="2026-01-20T00:53:40.935917476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:40.936203 containerd[1461]: time="2026-01-20T00:53:40.935931753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:40.937982 containerd[1461]: time="2026-01-20T00:53:40.937907943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:40.968308 systemd[1]: Started cri-containerd-3c93758c06d690fe4708072d9a8aacbca4d6388ea94e47aded6933b88b80b4f8.scope - libcontainer container 3c93758c06d690fe4708072d9a8aacbca4d6388ea94e47aded6933b88b80b4f8. 
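The containerd entries throughout this log are logfmt-style records (time="..." level=... msg="...") embedded in the journal lines. A small, permissive stdlib parser that pulls those three fields back out, useful when post-processing a capture like this one; quoting and escape handling are simplified for the sketch.

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches time="...", level=..., and msg="..." with backslash-escaped quotes.
var containerdRe = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func parseContainerdLine(line string) (ts, level, msg string, ok bool) {
	m := containerdRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	line := `time="2026-01-20T00:53:43.075914914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" in 2.061190739s"`
	if ts, level, msg, ok := parseContainerdLine(line); ok {
		fmt.Printf("ts=%s level=%s msg=%s\n", ts, level, msg)
	}
}
```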
Jan 20 00:53:41.010821 containerd[1461]: time="2026-01-20T00:53:41.010620171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2bzt2,Uid:cdd6a61c-cf6e-453d-bed0-7b43b860a45f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3c93758c06d690fe4708072d9a8aacbca4d6388ea94e47aded6933b88b80b4f8\"" Jan 20 00:53:41.014771 containerd[1461]: time="2026-01-20T00:53:41.014653102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 00:53:41.123607 kubelet[2520]: E0120 00:53:41.123512 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:41.131375 kubelet[2520]: I0120 00:53:41.131304 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-klxgn" podStartSLOduration=1.131288681 podStartE2EDuration="1.131288681s" podCreationTimestamp="2026-01-20 00:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:53:41.131215579 +0000 UTC m=+6.123979106" watchObservedRunningTime="2026-01-20 00:53:41.131288681 +0000 UTC m=+6.124052208" Jan 20 00:53:42.224880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483720797.mount: Deactivated successfully. Jan 20 00:53:42.756405 kubelet[2520]: E0120 00:53:42.756286 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:43.070396 containerd[1461]: time="2026-01-20T00:53:43.070217181Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:43.071333 containerd[1461]: time="2026-01-20T00:53:43.071295197Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 00:53:43.072807 containerd[1461]: time="2026-01-20T00:53:43.072774498Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:43.075237 containerd[1461]: time="2026-01-20T00:53:43.075193240Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:43.075994 containerd[1461]: time="2026-01-20T00:53:43.075914914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.061190739s" Jan 20 00:53:43.075994 containerd[1461]: time="2026-01-20T00:53:43.075980306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 00:53:43.080006 containerd[1461]: time="2026-01-20T00:53:43.079921511Z" level=info msg="CreateContainer within sandbox \"3c93758c06d690fe4708072d9a8aacbca4d6388ea94e47aded6933b88b80b4f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 00:53:43.092731 containerd[1461]: 
time="2026-01-20T00:53:43.092648339Z" level=info msg="CreateContainer within sandbox \"3c93758c06d690fe4708072d9a8aacbca4d6388ea94e47aded6933b88b80b4f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d4bb41e6a70cad138417eb4244c8e079098622316251cf30687629b3179220ce\"" Jan 20 00:53:43.093269 containerd[1461]: time="2026-01-20T00:53:43.093198054Z" level=info msg="StartContainer for \"d4bb41e6a70cad138417eb4244c8e079098622316251cf30687629b3179220ce\"" Jan 20 00:53:43.129035 kubelet[2520]: E0120 00:53:43.129010 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:43.136255 systemd[1]: Started cri-containerd-d4bb41e6a70cad138417eb4244c8e079098622316251cf30687629b3179220ce.scope - libcontainer container d4bb41e6a70cad138417eb4244c8e079098622316251cf30687629b3179220ce. Jan 20 00:53:43.168314 containerd[1461]: time="2026-01-20T00:53:43.168232997Z" level=info msg="StartContainer for \"d4bb41e6a70cad138417eb4244c8e079098622316251cf30687629b3179220ce\" returns successfully" Jan 20 00:53:44.132688 kubelet[2520]: E0120 00:53:44.132631 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:48.319243 sudo[1636]: pam_unix(sudo:session): session closed for user root Jan 20 00:53:48.325561 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 20 00:53:48.332774 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:47486.service: Deactivated successfully. Jan 20 00:53:48.336069 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:53:48.341176 systemd[1]: session-7.scope: Consumed 8.108s CPU time, 160.9M memory peak, 0B memory swap peak. Jan 20 00:53:48.346417 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:53:48.348547 systemd-logind[1444]: Removed session 7. Jan 20 00:53:48.352346 update_engine[1447]: I20260120 00:53:48.352308 1447 update_attempter.cc:509] Updating boot flags... 
Jan 20 00:53:48.421899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2940) Jan 20 00:53:48.481298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2942) Jan 20 00:53:48.534157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2942) Jan 20 00:53:48.939806 kubelet[2520]: E0120 00:53:48.939745 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:48.947979 kubelet[2520]: I0120 00:53:48.947527 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2bzt2" podStartSLOduration=6.885028935 podStartE2EDuration="8.94751707s" podCreationTimestamp="2026-01-20 00:53:40 +0000 UTC" firstStartedPulling="2026-01-20 00:53:41.014266425 +0000 UTC m=+6.007029951" lastFinishedPulling="2026-01-20 00:53:43.076754559 +0000 UTC m=+8.069518086" observedRunningTime="2026-01-20 00:53:44.141370031 +0000 UTC m=+9.134133558" watchObservedRunningTime="2026-01-20 00:53:48.94751707 +0000 UTC m=+13.940280596" Jan 20 00:53:49.757579 kubelet[2520]: E0120 00:53:49.757519 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:52.385286 systemd[1]: Created slice kubepods-besteffort-podf17d94b1_6f72_4a29_ad2b_7b0ebb1fd830.slice - libcontainer container kubepods-besteffort-podf17d94b1_6f72_4a29_ad2b_7b0ebb1fd830.slice. Jan 20 00:53:52.488739 kubelet[2520]: I0120 00:53:52.488625 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwdn\" (UniqueName: \"kubernetes.io/projected/f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830-kube-api-access-2jwdn\") pod \"calico-typha-7b98c7cf6c-qzmtt\" (UID: \"f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830\") " pod="calico-system/calico-typha-7b98c7cf6c-qzmtt" Jan 20 00:53:52.488739 kubelet[2520]: I0120 00:53:52.488726 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830-tigera-ca-bundle\") pod \"calico-typha-7b98c7cf6c-qzmtt\" (UID: \"f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830\") " pod="calico-system/calico-typha-7b98c7cf6c-qzmtt" Jan 20 00:53:52.489270 kubelet[2520]: I0120 00:53:52.488805 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830-typha-certs\") pod \"calico-typha-7b98c7cf6c-qzmtt\" (UID: \"f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830\") " pod="calico-system/calico-typha-7b98c7cf6c-qzmtt" Jan 20 00:53:52.554543 systemd[1]: Created slice kubepods-besteffort-podd4ed6a26_498b_4f34_bf24_1c434187ba66.slice - libcontainer container kubepods-besteffort-podd4ed6a26_498b_4f34_bf24_1c434187ba66.slice. 
Jan 20 00:53:52.590211 kubelet[2520]: I0120 00:53:52.590171 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-policysync\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.590383 kubelet[2520]: I0120 00:53:52.590219 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ed6a26-498b-4f34-bf24-1c434187ba66-tigera-ca-bundle\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.590383 kubelet[2520]: I0120 00:53:52.590252 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-flexvol-driver-host\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.590383 kubelet[2520]: I0120 00:53:52.590271 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-var-lib-calico\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.590383 kubelet[2520]: I0120 00:53:52.590353 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-var-run-calico\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.590475 kubelet[2520]: I0120 00:53:52.590450 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d4ed6a26-498b-4f34-bf24-1c434187ba66-node-certs\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591052 kubelet[2520]: I0120 00:53:52.590739 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-cni-log-dir\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591052 kubelet[2520]: I0120 00:53:52.590761 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-cni-net-dir\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591052 kubelet[2520]: I0120 00:53:52.590775 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-lib-modules\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591052 kubelet[2520]: I0120 00:53:52.590792 2520 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr849\" (UniqueName: \"kubernetes.io/projected/d4ed6a26-498b-4f34-bf24-1c434187ba66-kube-api-access-kr849\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591052 kubelet[2520]: I0120 00:53:52.590817 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-cni-bin-dir\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.591244 kubelet[2520]: I0120 00:53:52.590830 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4ed6a26-498b-4f34-bf24-1c434187ba66-xtables-lock\") pod \"calico-node-qw9tp\" (UID: \"d4ed6a26-498b-4f34-bf24-1c434187ba66\") " pod="calico-system/calico-node-qw9tp" Jan 20 00:53:52.693916 kubelet[2520]: E0120 00:53:52.693876 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.693916 kubelet[2520]: W0120 00:53:52.693906 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.695740 kubelet[2520]: E0120 00:53:52.694733 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.695740 kubelet[2520]: E0120 00:53:52.694812 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:52.695819 containerd[1461]: time="2026-01-20T00:53:52.695628771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b98c7cf6c-qzmtt,Uid:f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:52.696162 kubelet[2520]: E0120 00:53:52.695770 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.696162 kubelet[2520]: W0120 00:53:52.695785 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.696162 kubelet[2520]: E0120 00:53:52.695801 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.702190 kubelet[2520]: E0120 00:53:52.701701 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.702190 kubelet[2520]: W0120 00:53:52.701729 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.702190 kubelet[2520]: E0120 00:53:52.701741 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.727273 containerd[1461]: time="2026-01-20T00:53:52.727128667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:52.727273 containerd[1461]: time="2026-01-20T00:53:52.727191555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:52.727273 containerd[1461]: time="2026-01-20T00:53:52.727202305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:52.727569 containerd[1461]: time="2026-01-20T00:53:52.727292362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:52.759470 systemd[1]: Started cri-containerd-a01156b131e366d74989ea2a7293ea09866b7c07c80749d7d535e4bd69796bbc.scope - libcontainer container a01156b131e366d74989ea2a7293ea09866b7c07c80749d7d535e4bd69796bbc. Jan 20 00:53:52.761407 kubelet[2520]: E0120 00:53:52.761353 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:53:52.792723 kubelet[2520]: E0120 00:53:52.792535 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.792723 kubelet[2520]: W0120 00:53:52.792557 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.792723 kubelet[2520]: E0120 00:53:52.792576 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.792977 kubelet[2520]: E0120 00:53:52.792928 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.792977 kubelet[2520]: W0120 00:53:52.792958 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.792977 kubelet[2520]: E0120 00:53:52.792977 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.793418 kubelet[2520]: E0120 00:53:52.793403 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.793632 kubelet[2520]: W0120 00:53:52.793469 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.793632 kubelet[2520]: E0120 00:53:52.793483 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.794245 kubelet[2520]: E0120 00:53:52.794107 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.794245 kubelet[2520]: W0120 00:53:52.794120 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.794245 kubelet[2520]: E0120 00:53:52.794132 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.794770 kubelet[2520]: E0120 00:53:52.794625 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.794770 kubelet[2520]: W0120 00:53:52.794665 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.794770 kubelet[2520]: E0120 00:53:52.794676 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.795026 kubelet[2520]: E0120 00:53:52.795014 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.795243 kubelet[2520]: W0120 00:53:52.795132 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.795243 kubelet[2520]: E0120 00:53:52.795147 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.795537 kubelet[2520]: E0120 00:53:52.795416 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.795537 kubelet[2520]: W0120 00:53:52.795426 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.795537 kubelet[2520]: E0120 00:53:52.795435 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.795722 kubelet[2520]: E0120 00:53:52.795710 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.795781 kubelet[2520]: W0120 00:53:52.795770 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.795827 kubelet[2520]: E0120 00:53:52.795816 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.796152 kubelet[2520]: E0120 00:53:52.796140 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.796214 kubelet[2520]: W0120 00:53:52.796204 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.796260 kubelet[2520]: E0120 00:53:52.796250 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.796752 kubelet[2520]: E0120 00:53:52.796612 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.796752 kubelet[2520]: W0120 00:53:52.796623 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.796752 kubelet[2520]: E0120 00:53:52.796631 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.796962 kubelet[2520]: E0120 00:53:52.796905 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.797052 kubelet[2520]: W0120 00:53:52.797004 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.797234 kubelet[2520]: E0120 00:53:52.797130 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.797552 kubelet[2520]: E0120 00:53:52.797540 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.797701 kubelet[2520]: W0120 00:53:52.797621 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.797701 kubelet[2520]: E0120 00:53:52.797657 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.798112 kubelet[2520]: E0120 00:53:52.798063 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.798273 kubelet[2520]: W0120 00:53:52.798180 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.798273 kubelet[2520]: E0120 00:53:52.798194 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.798714 kubelet[2520]: E0120 00:53:52.798616 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.798714 kubelet[2520]: W0120 00:53:52.798626 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.798714 kubelet[2520]: E0120 00:53:52.798673 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.799189 kubelet[2520]: E0120 00:53:52.799177 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.799336 kubelet[2520]: W0120 00:53:52.799282 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.799336 kubelet[2520]: E0120 00:53:52.799296 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.799864 kubelet[2520]: E0120 00:53:52.799721 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.799864 kubelet[2520]: W0120 00:53:52.799731 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.799864 kubelet[2520]: E0120 00:53:52.799739 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.800025 kubelet[2520]: E0120 00:53:52.800015 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.800125 kubelet[2520]: W0120 00:53:52.800065 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.800190 kubelet[2520]: E0120 00:53:52.800180 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.800496 kubelet[2520]: E0120 00:53:52.800485 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.800560 kubelet[2520]: W0120 00:53:52.800550 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.800601 kubelet[2520]: E0120 00:53:52.800591 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.801028 kubelet[2520]: E0120 00:53:52.800969 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.801028 kubelet[2520]: W0120 00:53:52.800980 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.801028 kubelet[2520]: E0120 00:53:52.800988 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.801554 kubelet[2520]: E0120 00:53:52.801391 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.801554 kubelet[2520]: W0120 00:53:52.801401 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.801554 kubelet[2520]: E0120 00:53:52.801409 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.801930 kubelet[2520]: E0120 00:53:52.801916 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.801985 kubelet[2520]: W0120 00:53:52.801975 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.802047 kubelet[2520]: E0120 00:53:52.802015 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.802182 kubelet[2520]: I0120 00:53:52.802060 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2f383b2b-693c-42c3-b0a3-10cbb7e70071-registration-dir\") pod \"csi-node-driver-hm58c\" (UID: \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\") " pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:52.802513 kubelet[2520]: E0120 00:53:52.802491 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.802513 kubelet[2520]: W0120 00:53:52.802509 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.802513 kubelet[2520]: E0120 00:53:52.802519 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.802618 containerd[1461]: time="2026-01-20T00:53:52.802573438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b98c7cf6c-qzmtt,Uid:f17d94b1-6f72-4a29-ad2b-7b0ebb1fd830,Namespace:calico-system,Attempt:0,} returns sandbox id \"a01156b131e366d74989ea2a7293ea09866b7c07c80749d7d535e4bd69796bbc\"" Jan 20 00:53:52.802936 kubelet[2520]: E0120 00:53:52.802848 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.802936 kubelet[2520]: W0120 00:53:52.802874 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.802936 kubelet[2520]: E0120 00:53:52.802888 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.803305 kubelet[2520]: E0120 00:53:52.803261 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.803305 kubelet[2520]: W0120 00:53:52.803288 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.803305 kubelet[2520]: E0120 00:53:52.803297 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.803425 kubelet[2520]: I0120 00:53:52.803319 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hnm\" (UniqueName: \"kubernetes.io/projected/2f383b2b-693c-42c3-b0a3-10cbb7e70071-kube-api-access-24hnm\") pod \"csi-node-driver-hm58c\" (UID: \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\") " pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:52.803507 kubelet[2520]: E0120 00:53:52.803454 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:52.803690 kubelet[2520]: E0120 00:53:52.803624 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.803690 kubelet[2520]: W0120 00:53:52.803665 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.803690 kubelet[2520]: E0120 00:53:52.803675 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.803690 kubelet[2520]: I0120 00:53:52.803690 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f383b2b-693c-42c3-b0a3-10cbb7e70071-kubelet-dir\") pod \"csi-node-driver-hm58c\" (UID: \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\") " pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:52.804021 kubelet[2520]: E0120 00:53:52.803961 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.804021 kubelet[2520]: W0120 00:53:52.803986 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.804021 kubelet[2520]: E0120 00:53:52.803994 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.804021 kubelet[2520]: I0120 00:53:52.804007 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2f383b2b-693c-42c3-b0a3-10cbb7e70071-varrun\") pod \"csi-node-driver-hm58c\" (UID: \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\") " pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:52.804352 kubelet[2520]: E0120 00:53:52.804309 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.804352 kubelet[2520]: W0120 00:53:52.804336 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.804352 kubelet[2520]: E0120 00:53:52.804345 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.804596 kubelet[2520]: I0120 00:53:52.804357 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2f383b2b-693c-42c3-b0a3-10cbb7e70071-socket-dir\") pod \"csi-node-driver-hm58c\" (UID: \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\") " pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:52.804797 kubelet[2520]: E0120 00:53:52.804713 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.804797 kubelet[2520]: W0120 00:53:52.804735 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.804797 kubelet[2520]: E0120 00:53:52.804744 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.805364 kubelet[2520]: E0120 00:53:52.805253 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.805364 kubelet[2520]: W0120 00:53:52.805275 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.805364 kubelet[2520]: E0120 00:53:52.805284 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.805824 kubelet[2520]: E0120 00:53:52.805687 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.805824 kubelet[2520]: W0120 00:53:52.805697 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.805824 kubelet[2520]: E0120 00:53:52.805706 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.806015 containerd[1461]: time="2026-01-20T00:53:52.805700458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 00:53:52.806051 kubelet[2520]: E0120 00:53:52.806011 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.806308 kubelet[2520]: W0120 00:53:52.806294 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.806308 kubelet[2520]: E0120 00:53:52.806309 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.806728 kubelet[2520]: E0120 00:53:52.806706 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.806728 kubelet[2520]: W0120 00:53:52.806725 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.806812 kubelet[2520]: E0120 00:53:52.806733 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.807248 kubelet[2520]: E0120 00:53:52.807193 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.807248 kubelet[2520]: W0120 00:53:52.807212 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.807248 kubelet[2520]: E0120 00:53:52.807223 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.807536 kubelet[2520]: E0120 00:53:52.807483 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.807536 kubelet[2520]: W0120 00:53:52.807504 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.807536 kubelet[2520]: E0120 00:53:52.807512 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.807827 kubelet[2520]: E0120 00:53:52.807799 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.807827 kubelet[2520]: W0120 00:53:52.807813 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.807827 kubelet[2520]: E0120 00:53:52.807824 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.858431 kubelet[2520]: E0120 00:53:52.858337 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:52.859016 containerd[1461]: time="2026-01-20T00:53:52.858924052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qw9tp,Uid:d4ed6a26-498b-4f34-bf24-1c434187ba66,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:52.884536 containerd[1461]: time="2026-01-20T00:53:52.884416819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:53:52.884536 containerd[1461]: time="2026-01-20T00:53:52.884507098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:53:52.884536 containerd[1461]: time="2026-01-20T00:53:52.884530721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:52.884743 containerd[1461]: time="2026-01-20T00:53:52.884688306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:53:52.905003 kubelet[2520]: E0120 00:53:52.904973 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.905003 kubelet[2520]: W0120 00:53:52.904998 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.905144 kubelet[2520]: E0120 00:53:52.905015 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.905265 systemd[1]: Started cri-containerd-d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439.scope - libcontainer container d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439. Jan 20 00:53:52.905854 kubelet[2520]: E0120 00:53:52.905805 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.905894 kubelet[2520]: W0120 00:53:52.905866 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.905972 kubelet[2520]: E0120 00:53:52.905953 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.906680 kubelet[2520]: E0120 00:53:52.906627 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.906680 kubelet[2520]: W0120 00:53:52.906661 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.906680 kubelet[2520]: E0120 00:53:52.906671 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.906970 kubelet[2520]: E0120 00:53:52.906944 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.906970 kubelet[2520]: W0120 00:53:52.906955 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.906970 kubelet[2520]: E0120 00:53:52.906963 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.907354 kubelet[2520]: E0120 00:53:52.907249 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.907354 kubelet[2520]: W0120 00:53:52.907275 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.907354 kubelet[2520]: E0120 00:53:52.907284 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.907632 kubelet[2520]: E0120 00:53:52.907568 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.907632 kubelet[2520]: W0120 00:53:52.907577 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.907632 kubelet[2520]: E0120 00:53:52.907585 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.908031 kubelet[2520]: E0120 00:53:52.907938 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.908031 kubelet[2520]: W0120 00:53:52.907967 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.908031 kubelet[2520]: E0120 00:53:52.907977 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.908999 kubelet[2520]: E0120 00:53:52.908948 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.908999 kubelet[2520]: W0120 00:53:52.908976 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.908999 kubelet[2520]: E0120 00:53:52.908986 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.909362 kubelet[2520]: E0120 00:53:52.909324 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.909362 kubelet[2520]: W0120 00:53:52.909337 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.909362 kubelet[2520]: E0120 00:53:52.909346 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.909804 kubelet[2520]: E0120 00:53:52.909751 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.909804 kubelet[2520]: W0120 00:53:52.909777 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.909804 kubelet[2520]: E0120 00:53:52.909785 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.910251 kubelet[2520]: E0120 00:53:52.910055 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.910251 kubelet[2520]: W0120 00:53:52.910122 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.910251 kubelet[2520]: E0120 00:53:52.910132 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.910592 kubelet[2520]: E0120 00:53:52.910543 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.910592 kubelet[2520]: W0120 00:53:52.910572 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.910665 kubelet[2520]: E0120 00:53:52.910581 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.911218 kubelet[2520]: E0120 00:53:52.911169 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.911218 kubelet[2520]: W0120 00:53:52.911195 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.911271 kubelet[2520]: E0120 00:53:52.911205 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.911670 kubelet[2520]: E0120 00:53:52.911604 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.911670 kubelet[2520]: W0120 00:53:52.911630 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.911670 kubelet[2520]: E0120 00:53:52.911660 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.911993 kubelet[2520]: E0120 00:53:52.911978 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.911993 kubelet[2520]: W0120 00:53:52.911990 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.912043 kubelet[2520]: E0120 00:53:52.911998 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.912810 kubelet[2520]: E0120 00:53:52.912336 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.912810 kubelet[2520]: W0120 00:53:52.912346 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.912810 kubelet[2520]: E0120 00:53:52.912354 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.912916 kubelet[2520]: E0120 00:53:52.912838 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.912916 kubelet[2520]: W0120 00:53:52.912846 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.912916 kubelet[2520]: E0120 00:53:52.912855 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.913325 kubelet[2520]: E0120 00:53:52.913298 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.913325 kubelet[2520]: W0120 00:53:52.913314 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.913325 kubelet[2520]: E0120 00:53:52.913323 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.913602 kubelet[2520]: E0120 00:53:52.913566 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.913602 kubelet[2520]: W0120 00:53:52.913594 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.913602 kubelet[2520]: E0120 00:53:52.913604 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.913951 kubelet[2520]: E0120 00:53:52.913897 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.913951 kubelet[2520]: W0120 00:53:52.913923 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.913951 kubelet[2520]: E0120 00:53:52.913932 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.914342 kubelet[2520]: E0120 00:53:52.914286 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.914342 kubelet[2520]: W0120 00:53:52.914313 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.914342 kubelet[2520]: E0120 00:53:52.914321 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.914629 kubelet[2520]: E0120 00:53:52.914595 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.914629 kubelet[2520]: W0120 00:53:52.914606 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.914629 kubelet[2520]: E0120 00:53:52.914614 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.915762 kubelet[2520]: E0120 00:53:52.915592 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.915762 kubelet[2520]: W0120 00:53:52.915620 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.915762 kubelet[2520]: E0120 00:53:52.915630 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.916226 kubelet[2520]: E0120 00:53:52.916154 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.916226 kubelet[2520]: W0120 00:53:52.916179 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.916226 kubelet[2520]: E0120 00:53:52.916188 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.916859 kubelet[2520]: E0120 00:53:52.916793 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.916859 kubelet[2520]: W0120 00:53:52.916823 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.916859 kubelet[2520]: E0120 00:53:52.916834 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:52.926940 kubelet[2520]: E0120 00:53:52.926875 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:52.926940 kubelet[2520]: W0120 00:53:52.926918 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:52.926940 kubelet[2520]: E0120 00:53:52.926930 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:52.940168 containerd[1461]: time="2026-01-20T00:53:52.940127413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qw9tp,Uid:d4ed6a26-498b-4f34-bf24-1c434187ba66,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\"" Jan 20 00:53:52.940876 kubelet[2520]: E0120 00:53:52.940828 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:53.728381 containerd[1461]: time="2026-01-20T00:53:53.728311185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:53.729179 containerd[1461]: time="2026-01-20T00:53:53.729109546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 00:53:53.730338 containerd[1461]: time="2026-01-20T00:53:53.730272193Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:53.732546 containerd[1461]: time="2026-01-20T00:53:53.732484756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:53.733187 containerd[1461]: time="2026-01-20T00:53:53.733141485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 927.413787ms" Jan 20 00:53:53.733187 containerd[1461]: time="2026-01-20T00:53:53.733180337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 00:53:53.737913 containerd[1461]: time="2026-01-20T00:53:53.737853207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:53:53.762706 containerd[1461]: time="2026-01-20T00:53:53.762524077Z" level=info msg="CreateContainer within sandbox \"a01156b131e366d74989ea2a7293ea09866b7c07c80749d7d535e4bd69796bbc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 00:53:53.784659 containerd[1461]: time="2026-01-20T00:53:53.784561208Z" level=info msg="CreateContainer within sandbox \"a01156b131e366d74989ea2a7293ea09866b7c07c80749d7d535e4bd69796bbc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"531cf358ebfb628d77d6c1f6d2a07688dafaf5b41d50ce5c585e1fd1b0cef865\"" Jan 20 00:53:53.785180 containerd[1461]: time="2026-01-20T00:53:53.785154941Z" level=info msg="StartContainer for \"531cf358ebfb628d77d6c1f6d2a07688dafaf5b41d50ce5c585e1fd1b0cef865\"" Jan 20 00:53:53.816291 systemd[1]: Started cri-containerd-531cf358ebfb628d77d6c1f6d2a07688dafaf5b41d50ce5c585e1fd1b0cef865.scope - libcontainer container 531cf358ebfb628d77d6c1f6d2a07688dafaf5b41d50ce5c585e1fd1b0cef865. 
Jan 20 00:53:53.863119 containerd[1461]: time="2026-01-20T00:53:53.863039689Z" level=info msg="StartContainer for \"531cf358ebfb628d77d6c1f6d2a07688dafaf5b41d50ce5c585e1fd1b0cef865\" returns successfully" Jan 20 00:53:54.101589 kubelet[2520]: E0120 00:53:54.101433 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:53:54.155533 kubelet[2520]: E0120 00:53:54.155472 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:54.168759 kubelet[2520]: I0120 00:53:54.168676 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b98c7cf6c-qzmtt" podStartSLOduration=1.236282323 podStartE2EDuration="2.168662001s" podCreationTimestamp="2026-01-20 00:53:52 +0000 UTC" firstStartedPulling="2026-01-20 00:53:52.80532633 +0000 UTC m=+17.798089857" lastFinishedPulling="2026-01-20 00:53:53.737705948 +0000 UTC m=+18.730469535" observedRunningTime="2026-01-20 00:53:54.168425501 +0000 UTC m=+19.161189028" watchObservedRunningTime="2026-01-20 00:53:54.168662001 +0000 UTC m=+19.161425528" Jan 20 00:53:54.211312 kubelet[2520]: E0120 00:53:54.211280 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.211312 kubelet[2520]: W0120 00:53:54.211306 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.211923 kubelet[2520]: E0120 00:53:54.211870 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.212304 kubelet[2520]: E0120 00:53:54.212279 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.212304 kubelet[2520]: W0120 00:53:54.212303 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.212367 kubelet[2520]: E0120 00:53:54.212319 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.212621 kubelet[2520]: E0120 00:53:54.212599 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.212621 kubelet[2520]: W0120 00:53:54.212618 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.212735 kubelet[2520]: E0120 00:53:54.212627 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.212971 kubelet[2520]: E0120 00:53:54.212930 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.212971 kubelet[2520]: W0120 00:53:54.212954 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.212971 kubelet[2520]: E0120 00:53:54.212963 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.213320 kubelet[2520]: E0120 00:53:54.213295 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.213320 kubelet[2520]: W0120 00:53:54.213314 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.213377 kubelet[2520]: E0120 00:53:54.213322 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.213696 kubelet[2520]: E0120 00:53:54.213615 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.213696 kubelet[2520]: W0120 00:53:54.213655 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.213696 kubelet[2520]: E0120 00:53:54.213666 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.213977 kubelet[2520]: E0120 00:53:54.213947 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.213977 kubelet[2520]: W0120 00:53:54.213966 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.213977 kubelet[2520]: E0120 00:53:54.213975 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.214336 kubelet[2520]: E0120 00:53:54.214316 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.214336 kubelet[2520]: W0120 00:53:54.214333 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.214392 kubelet[2520]: E0120 00:53:54.214342 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.214689 kubelet[2520]: E0120 00:53:54.214668 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.214689 kubelet[2520]: W0120 00:53:54.214686 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.214750 kubelet[2520]: E0120 00:53:54.214694 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.215665 kubelet[2520]: E0120 00:53:54.215020 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.215665 kubelet[2520]: W0120 00:53:54.215035 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.215665 kubelet[2520]: E0120 00:53:54.215045 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.215665 kubelet[2520]: E0120 00:53:54.215396 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.215665 kubelet[2520]: W0120 00:53:54.215404 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.215665 kubelet[2520]: E0120 00:53:54.215447 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.215837 kubelet[2520]: E0120 00:53:54.215744 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.215837 kubelet[2520]: W0120 00:53:54.215754 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.215837 kubelet[2520]: E0120 00:53:54.215804 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.216185 kubelet[2520]: E0120 00:53:54.216161 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.216185 kubelet[2520]: W0120 00:53:54.216182 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.216242 kubelet[2520]: E0120 00:53:54.216192 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.216529 kubelet[2520]: E0120 00:53:54.216499 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.216529 kubelet[2520]: W0120 00:53:54.216511 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.216529 kubelet[2520]: E0120 00:53:54.216523 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.217019 kubelet[2520]: E0120 00:53:54.216986 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.217019 kubelet[2520]: W0120 00:53:54.217009 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.217241 kubelet[2520]: E0120 00:53:54.217021 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.220415 kubelet[2520]: E0120 00:53:54.220314 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.220415 kubelet[2520]: W0120 00:53:54.220337 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.220415 kubelet[2520]: E0120 00:53:54.220347 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.220881 kubelet[2520]: E0120 00:53:54.220837 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.220881 kubelet[2520]: W0120 00:53:54.220861 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.220881 kubelet[2520]: E0120 00:53:54.220872 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.221310 kubelet[2520]: E0120 00:53:54.221272 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.221310 kubelet[2520]: W0120 00:53:54.221294 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.221310 kubelet[2520]: E0120 00:53:54.221303 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.221694 kubelet[2520]: E0120 00:53:54.221628 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.221694 kubelet[2520]: W0120 00:53:54.221676 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.221694 kubelet[2520]: E0120 00:53:54.221687 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.221997 kubelet[2520]: E0120 00:53:54.221956 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.221997 kubelet[2520]: W0120 00:53:54.221980 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.221997 kubelet[2520]: E0120 00:53:54.221989 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.222327 kubelet[2520]: E0120 00:53:54.222284 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.222327 kubelet[2520]: W0120 00:53:54.222316 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.222327 kubelet[2520]: E0120 00:53:54.222325 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.222668 kubelet[2520]: E0120 00:53:54.222624 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.222668 kubelet[2520]: W0120 00:53:54.222662 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.222729 kubelet[2520]: E0120 00:53:54.222671 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.222955 kubelet[2520]: E0120 00:53:54.222931 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.222955 kubelet[2520]: W0120 00:53:54.222950 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.223027 kubelet[2520]: E0120 00:53:54.222959 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.223330 kubelet[2520]: E0120 00:53:54.223292 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.223330 kubelet[2520]: W0120 00:53:54.223310 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.223330 kubelet[2520]: E0120 00:53:54.223318 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.223760 kubelet[2520]: E0120 00:53:54.223723 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.223760 kubelet[2520]: W0120 00:53:54.223738 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.223760 kubelet[2520]: E0120 00:53:54.223747 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.224040 kubelet[2520]: E0120 00:53:54.224020 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.224040 kubelet[2520]: W0120 00:53:54.224033 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.224040 kubelet[2520]: E0120 00:53:54.224042 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.224374 kubelet[2520]: E0120 00:53:54.224351 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.224374 kubelet[2520]: W0120 00:53:54.224371 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.224422 kubelet[2520]: E0120 00:53:54.224380 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.224849 kubelet[2520]: E0120 00:53:54.224827 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.224849 kubelet[2520]: W0120 00:53:54.224846 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.224922 kubelet[2520]: E0120 00:53:54.224855 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.225230 kubelet[2520]: E0120 00:53:54.225187 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.225230 kubelet[2520]: W0120 00:53:54.225210 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.225230 kubelet[2520]: E0120 00:53:54.225229 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.225861 kubelet[2520]: E0120 00:53:54.225795 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.225861 kubelet[2520]: W0120 00:53:54.225808 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.225861 kubelet[2520]: E0120 00:53:54.225818 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.226155 kubelet[2520]: E0120 00:53:54.226133 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.226155 kubelet[2520]: W0120 00:53:54.226151 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.226203 kubelet[2520]: E0120 00:53:54.226160 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.226517 kubelet[2520]: E0120 00:53:54.226471 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.226517 kubelet[2520]: W0120 00:53:54.226502 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.226517 kubelet[2520]: E0120 00:53:54.226511 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:53:54.226823 kubelet[2520]: E0120 00:53:54.226781 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:53:54.226823 kubelet[2520]: W0120 00:53:54.226804 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:53:54.226823 kubelet[2520]: E0120 00:53:54.226812 2520 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:53:54.688420 containerd[1461]: time="2026-01-20T00:53:54.688357683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:54.689216 containerd[1461]: time="2026-01-20T00:53:54.689160048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 00:53:54.690311 containerd[1461]: time="2026-01-20T00:53:54.690256083Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:54.692574 containerd[1461]: time="2026-01-20T00:53:54.692530352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:54.693390 containerd[1461]: time="2026-01-20T00:53:54.693341285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 955.450708ms" Jan 20 00:53:54.693390 containerd[1461]: time="2026-01-20T00:53:54.693379958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 00:53:54.697466 containerd[1461]: time="2026-01-20T00:53:54.697423644Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:53:54.711444 containerd[1461]: time="2026-01-20T00:53:54.711385134Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177\"" Jan 20 00:53:54.711909 containerd[1461]: time="2026-01-20T00:53:54.711791993Z" level=info msg="StartContainer for \"9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177\"" Jan 20 00:53:54.748337 systemd[1]: Started cri-containerd-9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177.scope - libcontainer container 9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177. Jan 20 00:53:54.785475 containerd[1461]: time="2026-01-20T00:53:54.785389158Z" level=info msg="StartContainer for \"9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177\" returns successfully" Jan 20 00:53:54.794700 systemd[1]: cri-containerd-9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177.scope: Deactivated successfully. 
Jan 20 00:53:54.914814 containerd[1461]: time="2026-01-20T00:53:54.914579790Z" level=info msg="shim disconnected" id=9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177 namespace=k8s.io Jan 20 00:53:54.914814 containerd[1461]: time="2026-01-20T00:53:54.914774363Z" level=warning msg="cleaning up after shim disconnected" id=9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177 namespace=k8s.io Jan 20 00:53:54.914814 containerd[1461]: time="2026-01-20T00:53:54.914784842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:53:55.158032 kubelet[2520]: I0120 00:53:55.157989 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:53:55.159252 kubelet[2520]: E0120 00:53:55.158276 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:55.159252 kubelet[2520]: E0120 00:53:55.158768 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:55.159531 containerd[1461]: time="2026-01-20T00:53:55.159448518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:53:55.596750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9146aacce1cb8eb110ed80ea717a5c37a058be5942ce648076693dfbdafe0177-rootfs.mount: Deactivated successfully. Jan 20 00:53:56.098980 kubelet[2520]: E0120 00:53:56.098916 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:53:56.700964 containerd[1461]: time="2026-01-20T00:53:56.700902467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:56.703285 containerd[1461]: time="2026-01-20T00:53:56.703243603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:53:56.704355 containerd[1461]: time="2026-01-20T00:53:56.704291656Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:56.706788 containerd[1461]: time="2026-01-20T00:53:56.706696682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:53:56.707210 containerd[1461]: time="2026-01-20T00:53:56.707176662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.547698028s" Jan 20 00:53:56.707253 containerd[1461]: time="2026-01-20T00:53:56.707215854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:53:56.711841 containerd[1461]: 
time="2026-01-20T00:53:56.711796701Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:53:56.729061 containerd[1461]: time="2026-01-20T00:53:56.729005095Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7\"" Jan 20 00:53:56.729718 containerd[1461]: time="2026-01-20T00:53:56.729682758Z" level=info msg="StartContainer for \"50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7\"" Jan 20 00:53:56.777461 systemd[1]: Started cri-containerd-50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7.scope - libcontainer container 50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7. Jan 20 00:53:56.810994 containerd[1461]: time="2026-01-20T00:53:56.810813570Z" level=info msg="StartContainer for \"50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7\" returns successfully" Jan 20 00:53:57.163928 kubelet[2520]: E0120 00:53:57.163815 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:57.435395 systemd[1]: cri-containerd-50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7.scope: Deactivated successfully. Jan 20 00:53:57.458426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7-rootfs.mount: Deactivated successfully. Jan 20 00:53:57.478868 kubelet[2520]: I0120 00:53:57.478774 2520 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:53:57.519955 containerd[1461]: time="2026-01-20T00:53:57.519853753Z" level=info msg="shim disconnected" id=50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7 namespace=k8s.io Jan 20 00:53:57.519955 containerd[1461]: time="2026-01-20T00:53:57.519922662Z" level=warning msg="cleaning up after shim disconnected" id=50eadf8f75e6f357cac9941ac43fb7d800a8787bc4f0f6e309aad35a250c05e7 namespace=k8s.io Jan 20 00:53:57.519955 containerd[1461]: time="2026-01-20T00:53:57.519934494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:53:57.532275 systemd[1]: Created slice kubepods-besteffort-pode775ec00_cac2_4d62_a758_0e2d28913a84.slice - libcontainer container kubepods-besteffort-pode775ec00_cac2_4d62_a758_0e2d28913a84.slice. Jan 20 00:53:57.540745 systemd[1]: Created slice kubepods-burstable-podd8996315_c1bc_44a5_b42d_133ff549c4ad.slice - libcontainer container kubepods-burstable-podd8996315_c1bc_44a5_b42d_133ff549c4ad.slice. 
Jan 20 00:53:57.548609 kubelet[2520]: I0120 00:53:57.548566 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8996315-c1bc-44a5-b42d-133ff549c4ad-config-volume\") pod \"coredns-674b8bbfcf-d8bck\" (UID: \"d8996315-c1bc-44a5-b42d-133ff549c4ad\") " pod="kube-system/coredns-674b8bbfcf-d8bck" Jan 20 00:53:57.548749 kubelet[2520]: I0120 00:53:57.548615 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5673b085-3f3e-4250-ba9e-85fa33b4899b-config-volume\") pod \"coredns-674b8bbfcf-dnc6k\" (UID: \"5673b085-3f3e-4250-ba9e-85fa33b4899b\") " pod="kube-system/coredns-674b8bbfcf-dnc6k" Jan 20 00:53:57.548805 kubelet[2520]: I0120 00:53:57.548766 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e620ed7-8827-4f4a-b020-5c5456115c9e-goldmane-ca-bundle\") pod \"goldmane-666569f655-p8vbg\" (UID: \"7e620ed7-8827-4f4a-b020-5c5456115c9e\") " pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:57.548848 kubelet[2520]: I0120 00:53:57.548810 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hxn\" (UniqueName: \"kubernetes.io/projected/7e620ed7-8827-4f4a-b020-5c5456115c9e-kube-api-access-j9hxn\") pod \"goldmane-666569f655-p8vbg\" (UID: \"7e620ed7-8827-4f4a-b020-5c5456115c9e\") " pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:57.548848 kubelet[2520]: I0120 00:53:57.548839 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4z6\" (UniqueName: \"kubernetes.io/projected/d8996315-c1bc-44a5-b42d-133ff549c4ad-kube-api-access-jn4z6\") pod \"coredns-674b8bbfcf-d8bck\" (UID: \"d8996315-c1bc-44a5-b42d-133ff549c4ad\") " pod="kube-system/coredns-674b8bbfcf-d8bck" Jan 20 00:53:57.548894 kubelet[2520]: I0120 00:53:57.548870 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2wd\" (UniqueName: \"kubernetes.io/projected/5673b085-3f3e-4250-ba9e-85fa33b4899b-kube-api-access-vz2wd\") pod \"coredns-674b8bbfcf-dnc6k\" (UID: \"5673b085-3f3e-4250-ba9e-85fa33b4899b\") " pod="kube-system/coredns-674b8bbfcf-dnc6k" Jan 20 00:53:57.548918 kubelet[2520]: I0120 00:53:57.548900 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38bfb1da-e948-47f5-8ec0-b14e509cc2d8-tigera-ca-bundle\") pod \"calico-kube-controllers-6667b974c7-vm9zj\" (UID: \"38bfb1da-e948-47f5-8ec0-b14e509cc2d8\") " pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" Jan 20 00:53:57.549211 kubelet[2520]: I0120 00:53:57.548919 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gxl5\" (UniqueName: \"kubernetes.io/projected/84cb4cb2-d928-4fed-bf18-3918ea335ce0-kube-api-access-9gxl5\") pod \"calico-apiserver-54d67dfb4-f8mz9\" (UID: \"84cb4cb2-d928-4fed-bf18-3918ea335ce0\") " pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" Jan 20 00:53:57.549211 kubelet[2520]: I0120 00:53:57.548996 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-backend-key-pair\") pod \"whisker-6f9877bdc9-7cbwv\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " pod="calico-system/whisker-6f9877bdc9-7cbwv" Jan 20 00:53:57.549211 kubelet[2520]: I0120 00:53:57.549029 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-ca-bundle\") pod \"whisker-6f9877bdc9-7cbwv\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " pod="calico-system/whisker-6f9877bdc9-7cbwv" Jan 20 00:53:57.549211 kubelet[2520]: I0120 00:53:57.549053 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj98c\" (UniqueName: \"kubernetes.io/projected/e775ec00-cac2-4d62-a758-0e2d28913a84-kube-api-access-mj98c\") pod \"whisker-6f9877bdc9-7cbwv\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " pod="calico-system/whisker-6f9877bdc9-7cbwv" Jan 20 00:53:57.550040 kubelet[2520]: I0120 00:53:57.549070 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kjg\" (UniqueName: \"kubernetes.io/projected/38bfb1da-e948-47f5-8ec0-b14e509cc2d8-kube-api-access-z9kjg\") pod \"calico-kube-controllers-6667b974c7-vm9zj\" (UID: \"38bfb1da-e948-47f5-8ec0-b14e509cc2d8\") " pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" Jan 20 00:53:57.550040 kubelet[2520]: I0120 00:53:57.549247 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84cb4cb2-d928-4fed-bf18-3918ea335ce0-calico-apiserver-certs\") pod \"calico-apiserver-54d67dfb4-f8mz9\" (UID: \"84cb4cb2-d928-4fed-bf18-3918ea335ce0\") " pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" Jan 20 00:53:57.550040 kubelet[2520]: I0120 00:53:57.549265 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7e620ed7-8827-4f4a-b020-5c5456115c9e-goldmane-key-pair\") pod \"goldmane-666569f655-p8vbg\" (UID: \"7e620ed7-8827-4f4a-b020-5c5456115c9e\") " pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:57.550040 kubelet[2520]: I0120 00:53:57.549281 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e620ed7-8827-4f4a-b020-5c5456115c9e-config\") pod \"goldmane-666569f655-p8vbg\" (UID: \"7e620ed7-8827-4f4a-b020-5c5456115c9e\") " pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:57.552051 systemd[1]: Created slice kubepods-besteffort-pod38bfb1da_e948_47f5_8ec0_b14e509cc2d8.slice - libcontainer container kubepods-besteffort-pod38bfb1da_e948_47f5_8ec0_b14e509cc2d8.slice. Jan 20 00:53:57.561755 systemd[1]: Created slice kubepods-besteffort-pod0d6351d0_021b_40ba_9cae_6912429b9dd9.slice - libcontainer container kubepods-besteffort-pod0d6351d0_021b_40ba_9cae_6912429b9dd9.slice. Jan 20 00:53:57.569069 systemd[1]: Created slice kubepods-besteffort-pod84cb4cb2_d928_4fed_bf18_3918ea335ce0.slice - libcontainer container kubepods-besteffort-pod84cb4cb2_d928_4fed_bf18_3918ea335ce0.slice. Jan 20 00:53:57.578606 systemd[1]: Created slice kubepods-burstable-pod5673b085_3f3e_4250_ba9e_85fa33b4899b.slice - libcontainer container kubepods-burstable-pod5673b085_3f3e_4250_ba9e_85fa33b4899b.slice. 
Jan 20 00:53:57.587070 systemd[1]: Created slice kubepods-besteffort-pod7e620ed7_8827_4f4a_b020_5c5456115c9e.slice - libcontainer container kubepods-besteffort-pod7e620ed7_8827_4f4a_b020_5c5456115c9e.slice. Jan 20 00:53:57.649968 kubelet[2520]: I0120 00:53:57.649878 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2xc\" (UniqueName: \"kubernetes.io/projected/0d6351d0-021b-40ba-9cae-6912429b9dd9-kube-api-access-wt2xc\") pod \"calico-apiserver-54d67dfb4-4n657\" (UID: \"0d6351d0-021b-40ba-9cae-6912429b9dd9\") " pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" Jan 20 00:53:57.649968 kubelet[2520]: I0120 00:53:57.649970 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d6351d0-021b-40ba-9cae-6912429b9dd9-calico-apiserver-certs\") pod \"calico-apiserver-54d67dfb4-4n657\" (UID: \"0d6351d0-021b-40ba-9cae-6912429b9dd9\") " pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" Jan 20 00:53:57.838626 containerd[1461]: time="2026-01-20T00:53:57.838566581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9877bdc9-7cbwv,Uid:e775ec00-cac2-4d62-a758-0e2d28913a84,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:57.849114 kubelet[2520]: E0120 00:53:57.849013 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:57.850472 containerd[1461]: time="2026-01-20T00:53:57.850312206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d8bck,Uid:d8996315-c1bc-44a5-b42d-133ff549c4ad,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:57.868358 containerd[1461]: time="2026-01-20T00:53:57.868296142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-4n657,Uid:0d6351d0-021b-40ba-9cae-6912429b9dd9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:53:57.868589 containerd[1461]: time="2026-01-20T00:53:57.868552100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6667b974c7-vm9zj,Uid:38bfb1da-e948-47f5-8ec0-b14e509cc2d8,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:57.875391 containerd[1461]: time="2026-01-20T00:53:57.875002485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-f8mz9,Uid:84cb4cb2-d928-4fed-bf18-3918ea335ce0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:53:57.884328 kubelet[2520]: E0120 00:53:57.884059 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:57.887142 containerd[1461]: time="2026-01-20T00:53:57.887107875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dnc6k,Uid:5673b085-3f3e-4250-ba9e-85fa33b4899b,Namespace:kube-system,Attempt:0,}" Jan 20 00:53:57.891455 containerd[1461]: time="2026-01-20T00:53:57.891367644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p8vbg,Uid:7e620ed7-8827-4f4a-b020-5c5456115c9e,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:58.079495 containerd[1461]: time="2026-01-20T00:53:58.079348786Z" level=error msg="Failed to destroy network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.084560 containerd[1461]: time="2026-01-20T00:53:58.084493220Z" level=error msg="encountered an error cleaning up failed sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.086306 containerd[1461]: time="2026-01-20T00:53:58.085150948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p8vbg,Uid:7e620ed7-8827-4f4a-b020-5c5456115c9e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.091876 containerd[1461]: time="2026-01-20T00:53:58.088920891Z" level=error msg="Failed to destroy network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.091876 containerd[1461]: time="2026-01-20T00:53:58.090243048Z" level=error msg="encountered an error cleaning up failed sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.091876 containerd[1461]: time="2026-01-20T00:53:58.090286869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d8bck,Uid:d8996315-c1bc-44a5-b42d-133ff549c4ad,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.096958 containerd[1461]: time="2026-01-20T00:53:58.096219875Z" level=error msg="Failed to destroy network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.096958 containerd[1461]: time="2026-01-20T00:53:58.096603750Z" level=error msg="encountered an error cleaning up failed sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.096958 containerd[1461]: time="2026-01-20T00:53:58.096679502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9877bdc9-7cbwv,Uid:e775ec00-cac2-4d62-a758-0e2d28913a84,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.101404 kubelet[2520]: E0120 00:53:58.101346 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.101472 kubelet[2520]: E0120 00:53:58.101437 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:58.101504 kubelet[2520]: E0120 00:53:58.101479 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p8vbg" Jan 20 00:53:58.101575 kubelet[2520]: E0120 00:53:58.101524 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p8vbg_calico-system(7e620ed7-8827-4f4a-b020-5c5456115c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-p8vbg_calico-system(7e620ed7-8827-4f4a-b020-5c5456115c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:53:58.101709 kubelet[2520]: E0120 00:53:58.101549 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.101799 kubelet[2520]: E0120 00:53:58.101783 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d8bck" Jan 20 00:53:58.101859 kubelet[2520]: E0120 00:53:58.101846 2520 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d8bck" Jan 20 00:53:58.101968 kubelet[2520]: E0120 00:53:58.101943 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d8bck_kube-system(d8996315-c1bc-44a5-b42d-133ff549c4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d8bck_kube-system(d8996315-c1bc-44a5-b42d-133ff549c4ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d8bck" podUID="d8996315-c1bc-44a5-b42d-133ff549c4ad" Jan 20 00:53:58.104286 kubelet[2520]: E0120 00:53:58.103063 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.104378 kubelet[2520]: E0120 00:53:58.104324 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9877bdc9-7cbwv" Jan 20 00:53:58.104378 kubelet[2520]: E0120 00:53:58.104364 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9877bdc9-7cbwv" Jan 20 00:53:58.104495 kubelet[2520]: E0120 00:53:58.104418 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f9877bdc9-7cbwv_calico-system(e775ec00-cac2-4d62-a758-0e2d28913a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f9877bdc9-7cbwv_calico-system(e775ec00-cac2-4d62-a758-0e2d28913a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9877bdc9-7cbwv" podUID="e775ec00-cac2-4d62-a758-0e2d28913a84" Jan 20 00:53:58.106535 systemd[1]: Created slice 
kubepods-besteffort-pod2f383b2b_693c_42c3_b0a3_10cbb7e70071.slice - libcontainer container kubepods-besteffort-pod2f383b2b_693c_42c3_b0a3_10cbb7e70071.slice. Jan 20 00:53:58.108100 containerd[1461]: time="2026-01-20T00:53:58.108014113Z" level=error msg="Failed to destroy network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.109051 containerd[1461]: time="2026-01-20T00:53:58.109008918Z" level=error msg="encountered an error cleaning up failed sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.109181 containerd[1461]: time="2026-01-20T00:53:58.109136857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dnc6k,Uid:5673b085-3f3e-4250-ba9e-85fa33b4899b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.109506 kubelet[2520]: E0120 00:53:58.109456 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.109543 kubelet[2520]: E0120 00:53:58.109507 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dnc6k" Jan 20 00:53:58.109543 kubelet[2520]: E0120 00:53:58.109523 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dnc6k" Jan 20 00:53:58.109595 kubelet[2520]: E0120 00:53:58.109571 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dnc6k_kube-system(5673b085-3f3e-4250-ba9e-85fa33b4899b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dnc6k_kube-system(5673b085-3f3e-4250-ba9e-85fa33b4899b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dnc6k" podUID="5673b085-3f3e-4250-ba9e-85fa33b4899b" Jan 20 00:53:58.110465 containerd[1461]: time="2026-01-20T00:53:58.110379454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hm58c,Uid:2f383b2b-693c-42c3-b0a3-10cbb7e70071,Namespace:calico-system,Attempt:0,}" Jan 20 00:53:58.119930 containerd[1461]: time="2026-01-20T00:53:58.119886624Z" level=error msg="Failed to destroy network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.120376 containerd[1461]: time="2026-01-20T00:53:58.120314754Z" level=error msg="encountered an error cleaning up failed sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.120418 containerd[1461]: time="2026-01-20T00:53:58.120376529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-4n657,Uid:0d6351d0-021b-40ba-9cae-6912429b9dd9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.120574 kubelet[2520]: E0120 00:53:58.120539 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.120614 kubelet[2520]: E0120 00:53:58.120594 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" Jan 20 00:53:58.120667 kubelet[2520]: E0120 00:53:58.120613 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" Jan 20 00:53:58.120691 kubelet[2520]: E0120 00:53:58.120675 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d67dfb4-4n657_calico-apiserver(0d6351d0-021b-40ba-9cae-6912429b9dd9)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d67dfb4-4n657_calico-apiserver(0d6351d0-021b-40ba-9cae-6912429b9dd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:53:58.122510 containerd[1461]: time="2026-01-20T00:53:58.122394137Z" level=error msg="Failed to destroy network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.123002 containerd[1461]: time="2026-01-20T00:53:58.122930417Z" level=error msg="encountered an error cleaning up failed sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.123002 containerd[1461]: time="2026-01-20T00:53:58.122996240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6667b974c7-vm9zj,Uid:38bfb1da-e948-47f5-8ec0-b14e509cc2d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.123319 kubelet[2520]: E0120 00:53:58.123244 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.123319 kubelet[2520]: E0120 00:53:58.123285 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" Jan 20 00:53:58.123319 kubelet[2520]: E0120 00:53:58.123300 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" Jan 20 00:53:58.123410 kubelet[2520]: E0120 00:53:58.123332 2520 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6667b974c7-vm9zj_calico-system(38bfb1da-e948-47f5-8ec0-b14e509cc2d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6667b974c7-vm9zj_calico-system(38bfb1da-e948-47f5-8ec0-b14e509cc2d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:53:58.140434 containerd[1461]: time="2026-01-20T00:53:58.140374033Z" level=error msg="Failed to destroy network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.141424 containerd[1461]: time="2026-01-20T00:53:58.141381023Z" level=error msg="encountered an error cleaning up failed sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.141506 containerd[1461]: time="2026-01-20T00:53:58.141440023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-f8mz9,Uid:84cb4cb2-d928-4fed-bf18-3918ea335ce0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.141720 kubelet[2520]: E0120 00:53:58.141663 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.141720 kubelet[2520]: E0120 00:53:58.141715 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" Jan 20 00:53:58.141793 kubelet[2520]: E0120 00:53:58.141734 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" Jan 20 00:53:58.141822 kubelet[2520]: E0120 00:53:58.141784 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d67dfb4-f8mz9_calico-apiserver(84cb4cb2-d928-4fed-bf18-3918ea335ce0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d67dfb4-f8mz9_calico-apiserver(84cb4cb2-d928-4fed-bf18-3918ea335ce0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:53:58.166406 kubelet[2520]: I0120 00:53:58.166377 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:53:58.168458 kubelet[2520]: I0120 00:53:58.168222 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:53:58.169697 containerd[1461]: time="2026-01-20T00:53:58.169607170Z" level=info msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" Jan 20 00:53:58.171343 kubelet[2520]: I0120 00:53:58.170277 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:53:58.171385 containerd[1461]: time="2026-01-20T00:53:58.170715403Z" level=info msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" Jan 20 00:53:58.171385 containerd[1461]: time="2026-01-20T00:53:58.170734413Z" level=info msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" Jan 20 00:53:58.171439 containerd[1461]: time="2026-01-20T00:53:58.171399413Z" level=info msg="Ensure that sandbox bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01 in task-service has been cleanup successfully" Jan 20 00:53:58.171594 containerd[1461]: time="2026-01-20T00:53:58.171516997Z" level=info msg="Ensure that sandbox 7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a in task-service has been cleanup successfully" Jan 20 00:53:58.171842 containerd[1461]: time="2026-01-20T00:53:58.171823764Z" level=info msg="Ensure that sandbox 93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1 in task-service has been cleanup successfully" Jan 20 00:53:58.176524 kubelet[2520]: E0120 00:53:58.176476 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:53:58.179800 kubelet[2520]: I0120 00:53:58.179743 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:53:58.180229 containerd[1461]: time="2026-01-20T00:53:58.180201339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:53:58.180674 containerd[1461]: time="2026-01-20T00:53:58.180626723Z" level=info msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" 
Jan 20 00:53:58.180884 containerd[1461]: time="2026-01-20T00:53:58.180868082Z" level=info msg="Ensure that sandbox 4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a in task-service has been cleanup successfully" Jan 20 00:53:58.192166 kubelet[2520]: I0120 00:53:58.192114 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:53:58.193650 containerd[1461]: time="2026-01-20T00:53:58.193529297Z" level=info msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" Jan 20 00:53:58.193763 containerd[1461]: time="2026-01-20T00:53:58.193715392Z" level=info msg="Ensure that sandbox 391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58 in task-service has been cleanup successfully" Jan 20 00:53:58.195132 kubelet[2520]: I0120 00:53:58.194956 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:53:58.197121 containerd[1461]: time="2026-01-20T00:53:58.197045214Z" level=info msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" Jan 20 00:53:58.202893 containerd[1461]: time="2026-01-20T00:53:58.202753780Z" level=info msg="Ensure that sandbox c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44 in task-service has been cleanup successfully" Jan 20 00:53:58.202936 kubelet[2520]: I0120 00:53:58.202832 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:53:58.203623 containerd[1461]: time="2026-01-20T00:53:58.203495819Z" level=info msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" Jan 20 00:53:58.204989 containerd[1461]: time="2026-01-20T00:53:58.204369279Z" level=info msg="Ensure that sandbox 82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0 in task-service has been cleanup successfully" Jan 20 00:53:58.210361 containerd[1461]: time="2026-01-20T00:53:58.210316548Z" level=error msg="Failed to destroy network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.211377 containerd[1461]: time="2026-01-20T00:53:58.211342631Z" level=error msg="encountered an error cleaning up failed sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.211478 containerd[1461]: time="2026-01-20T00:53:58.211406921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hm58c,Uid:2f383b2b-693c-42c3-b0a3-10cbb7e70071,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.211742 kubelet[2520]: E0120 00:53:58.211710 2520 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.211784 kubelet[2520]: E0120 00:53:58.211768 2520 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:58.211810 kubelet[2520]: E0120 00:53:58.211790 2520 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hm58c" Jan 20 00:53:58.211860 kubelet[2520]: E0120 00:53:58.211834 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:53:58.237429 containerd[1461]: time="2026-01-20T00:53:58.237359045Z" level=error msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" failed" error="failed to destroy network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.237781 kubelet[2520]: E0120 00:53:58.237721 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:53:58.237824 kubelet[2520]: E0120 00:53:58.237791 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01"} Jan 20 00:53:58.237888 kubelet[2520]: E0120 00:53:58.237844 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5673b085-3f3e-4250-ba9e-85fa33b4899b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.237888 kubelet[2520]: E0120 00:53:58.237868 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5673b085-3f3e-4250-ba9e-85fa33b4899b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dnc6k" podUID="5673b085-3f3e-4250-ba9e-85fa33b4899b" Jan 20 00:53:58.238467 containerd[1461]: time="2026-01-20T00:53:58.238406507Z" level=error msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" failed" error="failed to destroy network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.238728 kubelet[2520]: E0120 00:53:58.238669 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:53:58.238728 kubelet[2520]: E0120 00:53:58.238720 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1"} Jan 20 00:53:58.238786 kubelet[2520]: E0120 00:53:58.238741 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84cb4cb2-d928-4fed-bf18-3918ea335ce0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.238786 kubelet[2520]: E0120 00:53:58.238757 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84cb4cb2-d928-4fed-bf18-3918ea335ce0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:53:58.244466 containerd[1461]: 
time="2026-01-20T00:53:58.244319416Z" level=error msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" failed" error="failed to destroy network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.245460 kubelet[2520]: E0120 00:53:58.244896 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:53:58.245460 kubelet[2520]: E0120 00:53:58.244947 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a"} Jan 20 00:53:58.245460 kubelet[2520]: E0120 00:53:58.244979 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e620ed7-8827-4f4a-b020-5c5456115c9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.245460 kubelet[2520]: E0120 00:53:58.245008 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e620ed7-8827-4f4a-b020-5c5456115c9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:53:58.251032 containerd[1461]: time="2026-01-20T00:53:58.250939302Z" level=error msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" failed" error="failed to destroy network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.251329 kubelet[2520]: E0120 00:53:58.251285 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:53:58.251329 kubelet[2520]: E0120 00:53:58.251320 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0"} Jan 20 00:53:58.251434 kubelet[2520]: E0120 00:53:58.251342 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8996315-c1bc-44a5-b42d-133ff549c4ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.251434 kubelet[2520]: E0120 00:53:58.251360 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8996315-c1bc-44a5-b42d-133ff549c4ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d8bck" podUID="d8996315-c1bc-44a5-b42d-133ff549c4ad" Jan 20 00:53:58.251702 containerd[1461]: time="2026-01-20T00:53:58.251595797Z" level=error msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" failed" error="failed to destroy network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.251888 kubelet[2520]: E0120 00:53:58.251856 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:53:58.251927 kubelet[2520]: E0120 00:53:58.251890 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a"} Jan 20 00:53:58.251927 kubelet[2520]: E0120 00:53:58.251908 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e775ec00-cac2-4d62-a758-0e2d28913a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.251998 kubelet[2520]: E0120 00:53:58.251924 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e775ec00-cac2-4d62-a758-0e2d28913a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9877bdc9-7cbwv" podUID="e775ec00-cac2-4d62-a758-0e2d28913a84" Jan 20 00:53:58.257813 containerd[1461]: time="2026-01-20T00:53:58.257695338Z" level=error msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" failed" error="failed to destroy network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.257965 kubelet[2520]: E0120 00:53:58.257921 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:53:58.257996 kubelet[2520]: E0120 00:53:58.257967 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44"} Jan 20 00:53:58.257996 kubelet[2520]: E0120 00:53:58.257989 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d6351d0-021b-40ba-9cae-6912429b9dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.258069 kubelet[2520]: E0120 00:53:58.258008 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d6351d0-021b-40ba-9cae-6912429b9dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:53:58.260113 containerd[1461]: time="2026-01-20T00:53:58.260017150Z" level=error msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" failed" error="failed to destroy network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:58.260321 kubelet[2520]: E0120 00:53:58.260286 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:53:58.260369 kubelet[2520]: E0120 00:53:58.260326 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58"} Jan 20 00:53:58.260369 kubelet[2520]: E0120 00:53:58.260352 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38bfb1da-e948-47f5-8ec0-b14e509cc2d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:58.260462 kubelet[2520]: E0120 00:53:58.260369 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38bfb1da-e948-47f5-8ec0-b14e509cc2d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:53:58.725857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a-shm.mount: Deactivated successfully. 
Jan 20 00:53:59.207321 kubelet[2520]: I0120 00:53:59.207269 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:53:59.209235 containerd[1461]: time="2026-01-20T00:53:59.209207374Z" level=info msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" Jan 20 00:53:59.209480 containerd[1461]: time="2026-01-20T00:53:59.209368154Z" level=info msg="Ensure that sandbox 271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea in task-service has been cleanup successfully" Jan 20 00:53:59.240328 containerd[1461]: time="2026-01-20T00:53:59.240284543Z" level=error msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" failed" error="failed to destroy network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:53:59.240754 kubelet[2520]: E0120 00:53:59.240709 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:53:59.240822 kubelet[2520]: E0120 00:53:59.240765 2520 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea"} Jan 20 00:53:59.240822 kubelet[2520]: E0120 00:53:59.240797 2520 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:53:59.240917 kubelet[2520]: E0120 00:53:59.240820 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f383b2b-693c-42c3-b0a3-10cbb7e70071\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:03.696956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000282141.mount: Deactivated successfully. 
Jan 20 00:54:03.926994 containerd[1461]: time="2026-01-20T00:54:03.926943261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:03.927873 containerd[1461]: time="2026-01-20T00:54:03.927833413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:54:03.929195 containerd[1461]: time="2026-01-20T00:54:03.929131450Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:03.931370 containerd[1461]: time="2026-01-20T00:54:03.931312193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:03.931977 containerd[1461]: time="2026-01-20T00:54:03.931945872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.751577682s" Jan 20 00:54:03.932021 containerd[1461]: time="2026-01-20T00:54:03.931980777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:54:03.943922 containerd[1461]: time="2026-01-20T00:54:03.943877638Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:54:03.960397 containerd[1461]: time="2026-01-20T00:54:03.960295552Z" level=info msg="CreateContainer within sandbox \"d7b723cbf1e2b458c5b01b08a220cf7b47c046c394f1415f02f4157223bbe439\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5\"" Jan 20 00:54:03.960835 containerd[1461]: time="2026-01-20T00:54:03.960807032Z" level=info msg="StartContainer for \"45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5\"" Jan 20 00:54:04.020256 systemd[1]: Started cri-containerd-45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5.scope - libcontainer container 45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5. Jan 20 00:54:04.053251 containerd[1461]: time="2026-01-20T00:54:04.053206298Z" level=info msg="StartContainer for \"45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5\" returns successfully" Jan 20 00:54:04.141739 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:54:04.141826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 20 00:54:04.219287 containerd[1461]: time="2026-01-20T00:54:04.218989422Z" level=info msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" Jan 20 00:54:04.230903 kubelet[2520]: E0120 00:54:04.230691 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:04.302652 kubelet[2520]: I0120 00:54:04.302553 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qw9tp" podStartSLOduration=1.311683867 podStartE2EDuration="12.302537216s" podCreationTimestamp="2026-01-20 00:53:52 +0000 UTC" firstStartedPulling="2026-01-20 00:53:52.941906717 +0000 UTC m=+17.934670244" lastFinishedPulling="2026-01-20 00:54:03.932760066 +0000 UTC m=+28.925523593" observedRunningTime="2026-01-20 00:54:04.267241174 +0000 UTC m=+29.260004701" watchObservedRunningTime="2026-01-20 00:54:04.302537216 +0000 UTC m=+29.295300744" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.302 [INFO][3835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.303 [INFO][3835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" iface="eth0" netns="/var/run/netns/cni-141e4a49-db1f-dfba-c903-783c92288f19" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.303 [INFO][3835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" iface="eth0" netns="/var/run/netns/cni-141e4a49-db1f-dfba-c903-783c92288f19" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.304 [INFO][3835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" iface="eth0" netns="/var/run/netns/cni-141e4a49-db1f-dfba-c903-783c92288f19" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.304 [INFO][3835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.304 [INFO][3835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.396 [INFO][3849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.397 [INFO][3849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.397 [INFO][3849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.408 [WARNING][3849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.408 [INFO][3849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.412 [INFO][3849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:04.417615 containerd[1461]: 2026-01-20 00:54:04.415 [INFO][3835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:04.418225 containerd[1461]: time="2026-01-20T00:54:04.417923721Z" level=info msg="TearDown network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" successfully" Jan 20 00:54:04.418225 containerd[1461]: time="2026-01-20T00:54:04.417955431Z" level=info msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" returns successfully" Jan 20 00:54:04.499786 kubelet[2520]: I0120 00:54:04.499650 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-ca-bundle\") pod \"e775ec00-cac2-4d62-a758-0e2d28913a84\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " Jan 20 00:54:04.499786 kubelet[2520]: I0120 00:54:04.499703 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj98c\" (UniqueName: \"kubernetes.io/projected/e775ec00-cac2-4d62-a758-0e2d28913a84-kube-api-access-mj98c\") pod \"e775ec00-cac2-4d62-a758-0e2d28913a84\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " Jan 20 00:54:04.499786 kubelet[2520]: I0120 00:54:04.499730 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-backend-key-pair\") pod \"e775ec00-cac2-4d62-a758-0e2d28913a84\" (UID: \"e775ec00-cac2-4d62-a758-0e2d28913a84\") " Jan 20 00:54:04.500475 kubelet[2520]: I0120 00:54:04.500416 2520 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e775ec00-cac2-4d62-a758-0e2d28913a84" (UID: "e775ec00-cac2-4d62-a758-0e2d28913a84"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:54:04.506140 kubelet[2520]: I0120 00:54:04.506038 2520 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e775ec00-cac2-4d62-a758-0e2d28913a84" (UID: "e775ec00-cac2-4d62-a758-0e2d28913a84"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:54:04.506265 kubelet[2520]: I0120 00:54:04.506204 2520 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e775ec00-cac2-4d62-a758-0e2d28913a84-kube-api-access-mj98c" (OuterVolumeSpecName: "kube-api-access-mj98c") pod "e775ec00-cac2-4d62-a758-0e2d28913a84" (UID: "e775ec00-cac2-4d62-a758-0e2d28913a84"). InnerVolumeSpecName "kube-api-access-mj98c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:54:04.600766 kubelet[2520]: I0120 00:54:04.600651 2520 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mj98c\" (UniqueName: \"kubernetes.io/projected/e775ec00-cac2-4d62-a758-0e2d28913a84-kube-api-access-mj98c\") on node \"localhost\" DevicePath \"\"" Jan 20 00:54:04.600766 kubelet[2520]: I0120 00:54:04.600727 2520 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 00:54:04.600766 kubelet[2520]: I0120 00:54:04.600751 2520 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e775ec00-cac2-4d62-a758-0e2d28913a84-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 00:54:04.699361 systemd[1]: run-netns-cni\x2d141e4a49\x2ddb1f\x2ddfba\x2dc903\x2d783c92288f19.mount: Deactivated successfully. Jan 20 00:54:04.699494 systemd[1]: var-lib-kubelet-pods-e775ec00\x2dcac2\x2d4d62\x2da758\x2d0e2d28913a84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmj98c.mount: Deactivated successfully. Jan 20 00:54:04.699606 systemd[1]: var-lib-kubelet-pods-e775ec00\x2dcac2\x2d4d62\x2da758\x2d0e2d28913a84-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 00:54:05.105227 systemd[1]: Removed slice kubepods-besteffort-pode775ec00_cac2_4d62_a758_0e2d28913a84.slice - libcontainer container kubepods-besteffort-pode775ec00_cac2_4d62_a758_0e2d28913a84.slice. Jan 20 00:54:05.232384 kubelet[2520]: I0120 00:54:05.232317 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:54:05.233425 kubelet[2520]: E0120 00:54:05.232954 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:05.297413 systemd[1]: Created slice kubepods-besteffort-poddb92007e_a07a_40c5_aa7a_9a7981e0ad4e.slice - libcontainer container kubepods-besteffort-poddb92007e_a07a_40c5_aa7a_9a7981e0ad4e.slice. 
Jan 20 00:54:05.303891 kubelet[2520]: I0120 00:54:05.303849 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db92007e-a07a-40c5-aa7a-9a7981e0ad4e-whisker-ca-bundle\") pod \"whisker-775dbdb97f-rmk2w\" (UID: \"db92007e-a07a-40c5-aa7a-9a7981e0ad4e\") " pod="calico-system/whisker-775dbdb97f-rmk2w" Jan 20 00:54:05.303891 kubelet[2520]: I0120 00:54:05.303891 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db92007e-a07a-40c5-aa7a-9a7981e0ad4e-whisker-backend-key-pair\") pod \"whisker-775dbdb97f-rmk2w\" (UID: \"db92007e-a07a-40c5-aa7a-9a7981e0ad4e\") " pod="calico-system/whisker-775dbdb97f-rmk2w" Jan 20 00:54:05.304069 kubelet[2520]: I0120 00:54:05.303908 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fzt\" (UniqueName: \"kubernetes.io/projected/db92007e-a07a-40c5-aa7a-9a7981e0ad4e-kube-api-access-x4fzt\") pod \"whisker-775dbdb97f-rmk2w\" (UID: \"db92007e-a07a-40c5-aa7a-9a7981e0ad4e\") " pod="calico-system/whisker-775dbdb97f-rmk2w" Jan 20 00:54:05.601300 containerd[1461]: time="2026-01-20T00:54:05.601249982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775dbdb97f-rmk2w,Uid:db92007e-a07a-40c5-aa7a-9a7981e0ad4e,Namespace:calico-system,Attempt:0,}" Jan 20 00:54:05.730700 systemd-networkd[1384]: cali5eb965d9305: Link UP Jan 20 00:54:05.731191 systemd-networkd[1384]: cali5eb965d9305: Gained carrier Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.653 [INFO][3974] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.666 [INFO][3974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--775dbdb97f--rmk2w-eth0 whisker-775dbdb97f- calico-system db92007e-a07a-40c5-aa7a-9a7981e0ad4e 901 0 2026-01-20 00:54:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775dbdb97f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-775dbdb97f-rmk2w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5eb965d9305 [] [] }} ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.666 [INFO][3974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.689 [INFO][3990] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" HandleID="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Workload="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.689 [INFO][3990] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" HandleID="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Workload="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-775dbdb97f-rmk2w", "timestamp":"2026-01-20 00:54:05.689563959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.689 [INFO][3990] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.690 [INFO][3990] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.690 [INFO][3990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.696 [INFO][3990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.701 [INFO][3990] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.705 [INFO][3990] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.707 [INFO][3990] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.709 [INFO][3990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.709 [INFO][3990] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.710 [INFO][3990] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.715 [INFO][3990] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.719 [INFO][3990] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.719 [INFO][3990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" host="localhost" Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.719 [INFO][3990] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:05.747722 containerd[1461]: 2026-01-20 00:54:05.719 [INFO][3990] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" HandleID="k8s-pod-network.aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Workload="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.722 [INFO][3974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--775dbdb97f--rmk2w-eth0", GenerateName:"whisker-775dbdb97f-", Namespace:"calico-system", SelfLink:"", UID:"db92007e-a07a-40c5-aa7a-9a7981e0ad4e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775dbdb97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-775dbdb97f-rmk2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5eb965d9305", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.722 [INFO][3974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.722 [INFO][3974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5eb965d9305 ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.731 [INFO][3974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.731 [INFO][3974] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--775dbdb97f--rmk2w-eth0", GenerateName:"whisker-775dbdb97f-", Namespace:"calico-system", SelfLink:"", UID:"db92007e-a07a-40c5-aa7a-9a7981e0ad4e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775dbdb97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c", Pod:"whisker-775dbdb97f-rmk2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5eb965d9305", MAC:"4a:03:16:c2:37:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:05.748315 containerd[1461]: 2026-01-20 00:54:05.744 [INFO][3974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c" Namespace="calico-system" Pod="whisker-775dbdb97f-rmk2w" WorkloadEndpoint="localhost-k8s-whisker--775dbdb97f--rmk2w-eth0" Jan 20 00:54:05.781041 containerd[1461]: time="2026-01-20T00:54:05.780918631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:05.781041 containerd[1461]: time="2026-01-20T00:54:05.780973763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:05.781041 containerd[1461]: time="2026-01-20T00:54:05.780985245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:05.781202 containerd[1461]: time="2026-01-20T00:54:05.781059995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:05.806242 systemd[1]: Started cri-containerd-aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c.scope - libcontainer container aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c. 
Jan 20 00:54:05.817664 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:05.841237 containerd[1461]: time="2026-01-20T00:54:05.841202426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775dbdb97f-rmk2w,Uid:db92007e-a07a-40c5-aa7a-9a7981e0ad4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"aee04a3d6ff14f40079e99c53a974f3384e759561117083ecc031a0ea4d8b81c\"" Jan 20 00:54:05.843939 containerd[1461]: time="2026-01-20T00:54:05.843914678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:54:05.921415 containerd[1461]: time="2026-01-20T00:54:05.921288352Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:05.928894 containerd[1461]: time="2026-01-20T00:54:05.922825011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:54:05.928894 containerd[1461]: time="2026-01-20T00:54:05.922954731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:54:05.929183 kubelet[2520]: E0120 00:54:05.929068 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:05.929183 kubelet[2520]: E0120 00:54:05.929161 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:05.929369 kubelet[2520]: E0120 00:54:05.929332 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df9856728c384f29b8c73d14eea5ef90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:05.931369 containerd[1461]: time="2026-01-20T00:54:05.931286193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:54:05.993741 containerd[1461]: time="2026-01-20T00:54:05.993612541Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:05.995037 containerd[1461]: time="2026-01-20T00:54:05.994956358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:54:05.995037 containerd[1461]: time="2026-01-20T00:54:05.995010215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:54:05.995203 kubelet[2520]: E0120 00:54:05.995168 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:05.995255 kubelet[2520]: E0120 00:54:05.995207 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:05.995377 kubelet[2520]: E0120 00:54:05.995319 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:05.997251 kubelet[2520]: E0120 00:54:05.997150 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:06.237054 kubelet[2520]: E0120 00:54:06.236903 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:07.100582 kubelet[2520]: I0120 00:54:07.100504 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e775ec00-cac2-4d62-a758-0e2d28913a84" path="/var/lib/kubelet/pods/e775ec00-cac2-4d62-a758-0e2d28913a84/volumes" Jan 20 00:54:07.136325 systemd-networkd[1384]: cali5eb965d9305: Gained IPv6LL Jan 20 00:54:07.238355 kubelet[2520]: E0120 00:54:07.238292 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:10.098698 containerd[1461]: time="2026-01-20T00:54:10.098559724Z" level=info msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.145 [INFO][4159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.145 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" iface="eth0" netns="/var/run/netns/cni-6992153e-1124-0918-6363-a54c2371c30e" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.145 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" iface="eth0" netns="/var/run/netns/cni-6992153e-1124-0918-6363-a54c2371c30e" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.146 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" iface="eth0" netns="/var/run/netns/cni-6992153e-1124-0918-6363-a54c2371c30e" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.146 [INFO][4159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.146 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.168 [INFO][4168] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.168 [INFO][4168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.168 [INFO][4168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.173 [WARNING][4168] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.173 [INFO][4168] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.175 [INFO][4168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:10.180041 containerd[1461]: 2026-01-20 00:54:10.177 [INFO][4159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:10.180504 containerd[1461]: time="2026-01-20T00:54:10.180254549Z" level=info msg="TearDown network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" successfully" Jan 20 00:54:10.180504 containerd[1461]: time="2026-01-20T00:54:10.180274326Z" level=info msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" returns successfully" Jan 20 00:54:10.181056 containerd[1461]: time="2026-01-20T00:54:10.180993896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-4n657,Uid:0d6351d0-021b-40ba-9cae-6912429b9dd9,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:54:10.182606 systemd[1]: run-netns-cni\x2d6992153e\x2d1124\x2d0918\x2d6363\x2da54c2371c30e.mount: Deactivated successfully. 
Jan 20 00:54:10.299961 systemd-networkd[1384]: calia1b89a70390: Link UP Jan 20 00:54:10.300395 systemd-networkd[1384]: calia1b89a70390: Gained carrier Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.224 [INFO][4177] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.234 [INFO][4177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0 calico-apiserver-54d67dfb4- calico-apiserver 0d6351d0-021b-40ba-9cae-6912429b9dd9 932 0 2026-01-20 00:53:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d67dfb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d67dfb4-4n657 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia1b89a70390 [] [] }} ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.234 [INFO][4177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.260 [INFO][4191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" HandleID="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.260 [INFO][4191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" HandleID="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d67dfb4-4n657", "timestamp":"2026-01-20 00:54:10.26061188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.260 [INFO][4191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.261 [INFO][4191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.261 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.267 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.273 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.277 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.279 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.281 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.281 [INFO][4191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.282 [INFO][4191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.288 [INFO][4191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.293 [INFO][4191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.293 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" host="localhost" Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.293 [INFO][4191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:10.318460 containerd[1461]: 2026-01-20 00:54:10.293 [INFO][4191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" HandleID="k8s-pod-network.68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.296 [INFO][4177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6351d0-021b-40ba-9cae-6912429b9dd9", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d67dfb4-4n657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1b89a70390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.296 [INFO][4177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.296 [INFO][4177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1b89a70390 ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.300 [INFO][4177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.301 [INFO][4177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6351d0-021b-40ba-9cae-6912429b9dd9", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d", Pod:"calico-apiserver-54d67dfb4-4n657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1b89a70390", MAC:"2e:7d:f1:8d:51:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:10.319248 containerd[1461]: 2026-01-20 00:54:10.315 [INFO][4177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-4n657" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:10.341158 containerd[1461]: time="2026-01-20T00:54:10.340931081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:10.341702 containerd[1461]: time="2026-01-20T00:54:10.341558449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:10.341702 containerd[1461]: time="2026-01-20T00:54:10.341674535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:10.341874 containerd[1461]: time="2026-01-20T00:54:10.341767649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:10.366243 systemd[1]: Started cri-containerd-68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d.scope - libcontainer container 68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d. 
Jan 20 00:54:10.383408 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:10.413547 containerd[1461]: time="2026-01-20T00:54:10.413501637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-4n657,Uid:0d6351d0-021b-40ba-9cae-6912429b9dd9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d\"" Jan 20 00:54:10.415347 containerd[1461]: time="2026-01-20T00:54:10.415250411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:10.482519 containerd[1461]: time="2026-01-20T00:54:10.482407031Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:10.484168 containerd[1461]: time="2026-01-20T00:54:10.484029984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:10.484168 containerd[1461]: time="2026-01-20T00:54:10.484132161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:10.484602 kubelet[2520]: E0120 00:54:10.484513 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:10.484602 kubelet[2520]: E0120 00:54:10.484578 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:10.484985 kubelet[2520]: E0120 00:54:10.484788 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt2xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-4n657_calico-apiserver(0d6351d0-021b-40ba-9cae-6912429b9dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:10.486152 kubelet[2520]: E0120 00:54:10.486031 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:10.544982 kubelet[2520]: I0120 00:54:10.544945 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:54:10.545478 kubelet[2520]: E0120 00:54:10.545408 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:10.858131 kernel: bpftool[4268]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:54:10.967816 systemd[1]: Started sshd@7-10.0.0.160:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700). Jan 20 00:54:11.029812 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:11.031810 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:11.038690 systemd-logind[1444]: New session 8 of user core. Jan 20 00:54:11.054471 systemd[1]: Started session-8.scope - Session 8 of User core. 
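Every pull so far (whisker, whisker-backend, calico-apiserver) fails with NotFound while resolving a ghcr.io/flatcar/calico/*:v3.30.4 reference. One way to check a tag out of band is to query the OCI distribution API directly, as in the sketch below; the anonymous ghcr.io/token flow is an assumption, and a 404 on the manifest HEAD corresponds to the "not found" errors in the log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Checks whether ghcr.io/flatcar/calico/whisker:v3.30.4 resolves, mirroring
// the reference containerd failed on above. Assumes ghcr.io's anonymous
// token endpoint for public repositories.
func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// 1. Anonymous pull token (assumed ghcr.io token service).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest for the tag.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // 404 corresponds to the NotFound errors above
}
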
Jan 20 00:54:11.099853 containerd[1461]: time="2026-01-20T00:54:11.099797741Z" level=info msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" Jan 20 00:54:11.101910 containerd[1461]: time="2026-01-20T00:54:11.101883022Z" level=info msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" Jan 20 00:54:11.171860 systemd-networkd[1384]: vxlan.calico: Link UP Jan 20 00:54:11.171870 systemd-networkd[1384]: vxlan.calico: Gained carrier Jan 20 00:54:11.249751 kubelet[2520]: E0120 00:54:11.249669 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:11.250919 kubelet[2520]: E0120 00:54:11.250555 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.190 [INFO][4357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.191 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" iface="eth0" netns="/var/run/netns/cni-a2576cb1-c66f-223a-7302-f256e2f6a904" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.191 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" iface="eth0" netns="/var/run/netns/cni-a2576cb1-c66f-223a-7302-f256e2f6a904" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.192 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" iface="eth0" netns="/var/run/netns/cni-a2576cb1-c66f-223a-7302-f256e2f6a904" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.192 [INFO][4357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.192 [INFO][4357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.235 [INFO][4389] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.236 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.236 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.253 [WARNING][4389] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.253 [INFO][4389] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.256 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:11.279146 containerd[1461]: 2026-01-20 00:54:11.264 [INFO][4357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:11.281618 sshd[4285]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:11.282737 systemd[1]: run-netns-cni\x2da2576cb1\x2dc66f\x2d223a\x2d7302\x2df256e2f6a904.mount: Deactivated successfully. Jan 20 00:54:11.283189 containerd[1461]: time="2026-01-20T00:54:11.283126836Z" level=info msg="TearDown network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" successfully" Jan 20 00:54:11.283189 containerd[1461]: time="2026-01-20T00:54:11.283157783Z" level=info msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" returns successfully" Jan 20 00:54:11.286156 containerd[1461]: time="2026-01-20T00:54:11.285195557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hm58c,Uid:2f383b2b-693c-42c3-b0a3-10cbb7e70071,Namespace:calico-system,Attempt:1,}" Jan 20 00:54:11.288817 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:54:11.289349 systemd[1]: sshd@7-10.0.0.160:22-10.0.0.1:48700.service: Deactivated successfully. Jan 20 00:54:11.291942 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.182 [INFO][4358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.183 [INFO][4358] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" iface="eth0" netns="/var/run/netns/cni-f918c490-c81e-3a4f-10eb-23ccf7748161" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.183 [INFO][4358] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" iface="eth0" netns="/var/run/netns/cni-f918c490-c81e-3a4f-10eb-23ccf7748161" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.183 [INFO][4358] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" iface="eth0" netns="/var/run/netns/cni-f918c490-c81e-3a4f-10eb-23ccf7748161" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.183 [INFO][4358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.183 [INFO][4358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.239 [INFO][4384] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.239 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.256 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.267 [WARNING][4384] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.269 [INFO][4384] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.275 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:11.294011 containerd[1461]: 2026-01-20 00:54:11.288 [INFO][4358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:11.295368 containerd[1461]: time="2026-01-20T00:54:11.295311937Z" level=info msg="TearDown network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" successfully" Jan 20 00:54:11.295368 containerd[1461]: time="2026-01-20T00:54:11.295352622Z" level=info msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" returns successfully" Jan 20 00:54:11.295704 kubelet[2520]: E0120 00:54:11.295661 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:11.296314 containerd[1461]: time="2026-01-20T00:54:11.296279854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d8bck,Uid:d8996315-c1bc-44a5-b42d-133ff549c4ad,Namespace:kube-system,Attempt:1,}" Jan 20 00:54:11.299687 systemd[1]: run-netns-cni\x2df918c490\x2dc81e\x2d3a4f\x2d10eb\x2d23ccf7748161.mount: Deactivated successfully. Jan 20 00:54:11.313175 systemd-logind[1444]: Removed session 8. 
Jan 20 00:54:11.467322 systemd-networkd[1384]: cali8fcda4eb047: Link UP Jan 20 00:54:11.468605 systemd-networkd[1384]: cali8fcda4eb047: Gained carrier Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.387 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--d8bck-eth0 coredns-674b8bbfcf- kube-system d8996315-c1bc-44a5-b42d-133ff549c4ad 981 0 2026-01-20 00:53:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-d8bck eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8fcda4eb047 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.387 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.417 [INFO][4449] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" HandleID="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.417 [INFO][4449] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" HandleID="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-d8bck", "timestamp":"2026-01-20 00:54:11.417674704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.417 [INFO][4449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.417 [INFO][4449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.417 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.425 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.436 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.443 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.445 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.448 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.448 [INFO][4449] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.449 [INFO][4449] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.453 [INFO][4449] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4449] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" host="localhost" Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:11.486251 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4449] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" HandleID="k8s-pod-network.5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486739 containerd[1461]: 2026-01-20 00:54:11.463 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d8bck-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8996315-c1bc-44a5-b42d-133ff549c4ad", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-d8bck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fcda4eb047", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:11.486739 containerd[1461]: 2026-01-20 00:54:11.463 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486739 containerd[1461]: 2026-01-20 00:54:11.463 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fcda4eb047 ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486739 containerd[1461]: 2026-01-20 00:54:11.470 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.486739 
containerd[1461]: 2026-01-20 00:54:11.471 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d8bck-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8996315-c1bc-44a5-b42d-133ff549c4ad", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae", Pod:"coredns-674b8bbfcf-d8bck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fcda4eb047", MAC:"b2:a1:25:e7:6e:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:11.486739 containerd[1461]: 2026-01-20 00:54:11.483 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae" Namespace="kube-system" Pod="coredns-674b8bbfcf-d8bck" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:11.508546 containerd[1461]: time="2026-01-20T00:54:11.508454564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:11.509403 containerd[1461]: time="2026-01-20T00:54:11.508555183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:11.509403 containerd[1461]: time="2026-01-20T00:54:11.508577454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:11.509403 containerd[1461]: time="2026-01-20T00:54:11.508694533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:11.537376 systemd[1]: Started cri-containerd-5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae.scope - libcontainer container 5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae. Jan 20 00:54:11.555554 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:11.592387 systemd-networkd[1384]: cali2a9746247c3: Link UP Jan 20 00:54:11.593804 systemd-networkd[1384]: cali2a9746247c3: Gained carrier Jan 20 00:54:11.608571 containerd[1461]: time="2026-01-20T00:54:11.607941602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d8bck,Uid:d8996315-c1bc-44a5-b42d-133ff549c4ad,Namespace:kube-system,Attempt:1,} returns sandbox id \"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae\"" Jan 20 00:54:11.615153 kubelet[2520]: E0120 00:54:11.615119 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.384 [INFO][4421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hm58c-eth0 csi-node-driver- calico-system 2f383b2b-693c-42c3-b0a3-10cbb7e70071 982 0 2026-01-20 00:53:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hm58c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a9746247c3 [] [] }} ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.384 [INFO][4421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.422 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" HandleID="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.422 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" HandleID="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aac80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hm58c", "timestamp":"2026-01-20 00:54:11.422272873 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.422 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.459 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.526 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.536 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.548 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.552 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.559 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.559 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.564 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.577 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.583 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.584 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" host="localhost" Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.584 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:11.617583 containerd[1461]: 2026-01-20 00:54:11.584 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" HandleID="k8s-pod-network.eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.587 [INFO][4421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hm58c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f383b2b-693c-42c3-b0a3-10cbb7e70071", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hm58c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a9746247c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.587 [INFO][4421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.587 [INFO][4421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a9746247c3 ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.593 [INFO][4421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.595 [INFO][4421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hm58c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f383b2b-693c-42c3-b0a3-10cbb7e70071", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed", Pod:"csi-node-driver-hm58c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a9746247c3", MAC:"ba:2e:e3:13:73:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:11.618306 containerd[1461]: 2026-01-20 00:54:11.610 [INFO][4421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed" Namespace="calico-system" Pod="csi-node-driver-hm58c" WorkloadEndpoint="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:11.625548 containerd[1461]: time="2026-01-20T00:54:11.625190780Z" level=info msg="CreateContainer within sandbox \"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:54:11.643014 containerd[1461]: time="2026-01-20T00:54:11.642900635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:11.643056 containerd[1461]: time="2026-01-20T00:54:11.642964985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:11.643056 containerd[1461]: time="2026-01-20T00:54:11.642983540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:11.643327 containerd[1461]: time="2026-01-20T00:54:11.643216094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:11.648501 containerd[1461]: time="2026-01-20T00:54:11.648370241Z" level=info msg="CreateContainer within sandbox \"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c765701c79f31b7341abbcde4816e6c79da91fb81248dfef27e507323e01a925\"" Jan 20 00:54:11.650063 containerd[1461]: time="2026-01-20T00:54:11.650010400Z" level=info msg="StartContainer for \"c765701c79f31b7341abbcde4816e6c79da91fb81248dfef27e507323e01a925\"" Jan 20 00:54:11.668261 systemd[1]: Started cri-containerd-eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed.scope - libcontainer container eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed. Jan 20 00:54:11.689419 systemd[1]: Started cri-containerd-c765701c79f31b7341abbcde4816e6c79da91fb81248dfef27e507323e01a925.scope - libcontainer container c765701c79f31b7341abbcde4816e6c79da91fb81248dfef27e507323e01a925. Jan 20 00:54:11.693369 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:11.711492 containerd[1461]: time="2026-01-20T00:54:11.711452307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hm58c,Uid:2f383b2b-693c-42c3-b0a3-10cbb7e70071,Namespace:calico-system,Attempt:1,} returns sandbox id \"eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed\"" Jan 20 00:54:11.715160 containerd[1461]: time="2026-01-20T00:54:11.715025065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:54:11.727112 containerd[1461]: time="2026-01-20T00:54:11.726932849Z" level=info msg="StartContainer for \"c765701c79f31b7341abbcde4816e6c79da91fb81248dfef27e507323e01a925\" returns successfully" Jan 20 00:54:11.776448 containerd[1461]: time="2026-01-20T00:54:11.776362115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:11.785479 containerd[1461]: time="2026-01-20T00:54:11.785407892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:54:11.785538 containerd[1461]: time="2026-01-20T00:54:11.785444729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:54:11.785723 kubelet[2520]: E0120 00:54:11.785622 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:11.785723 kubelet[2520]: E0120 00:54:11.785713 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:11.785914 kubelet[2520]: E0120 00:54:11.785833 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:11.788501 containerd[1461]: time="2026-01-20T00:54:11.788442880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:54:11.847600 containerd[1461]: time="2026-01-20T00:54:11.847553413Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:11.849143 containerd[1461]: time="2026-01-20T00:54:11.849028527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:54:11.849143 containerd[1461]: time="2026-01-20T00:54:11.849055755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:54:11.849386 kubelet[2520]: E0120 00:54:11.849332 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:11.849439 kubelet[2520]: E0120 00:54:11.849400 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:11.849618 kubelet[2520]: E0120 00:54:11.849538 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:11.850872 kubelet[2520]: E0120 00:54:11.850828 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:12.098307 containerd[1461]: time="2026-01-20T00:54:12.097959073Z" level=info msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" Jan 20 00:54:12.099376 containerd[1461]: time="2026-01-20T00:54:12.098755822Z" level=info msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" Jan 20 00:54:12.099376 containerd[1461]: time="2026-01-20T00:54:12.098890418Z" level=info msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" Jan 20 00:54:12.099376 containerd[1461]: time="2026-01-20T00:54:12.099004604Z" level=info msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" Jan 20 00:54:12.128692 systemd-networkd[1384]: calia1b89a70390: Gained IPv6LL Jan 20 00:54:12.192317 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.188 [INFO][4683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.188 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" iface="eth0" netns="/var/run/netns/cni-5c831c9e-cee6-300c-e7fb-aef0fee254ac" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.189 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" iface="eth0" netns="/var/run/netns/cni-5c831c9e-cee6-300c-e7fb-aef0fee254ac" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.193 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" iface="eth0" netns="/var/run/netns/cni-5c831c9e-cee6-300c-e7fb-aef0fee254ac" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.193 [INFO][4683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.193 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.228 [INFO][4730] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.229 [INFO][4730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.229 [INFO][4730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.235 [WARNING][4730] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.235 [INFO][4730] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.237 [INFO][4730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:12.246000 containerd[1461]: 2026-01-20 00:54:12.243 [INFO][4683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:12.247431 containerd[1461]: time="2026-01-20T00:54:12.247365218Z" level=info msg="TearDown network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" successfully" Jan 20 00:54:12.247552 containerd[1461]: time="2026-01-20T00:54:12.247459293Z" level=info msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" returns successfully" Jan 20 00:54:12.248654 containerd[1461]: time="2026-01-20T00:54:12.248527124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6667b974c7-vm9zj,Uid:38bfb1da-e948-47f5-8ec0-b14e509cc2d8,Namespace:calico-system,Attempt:1,}" Jan 20 00:54:12.256701 kubelet[2520]: E0120 00:54:12.256242 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:12.271138 kubelet[2520]: E0120 00:54:12.265219 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:12.271138 kubelet[2520]: E0120 00:54:12.266694 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hm58c" 
podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.173 [INFO][4695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.176 [INFO][4695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" iface="eth0" netns="/var/run/netns/cni-f8761f36-176b-ee15-d06b-ed84b09b9c3b" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.176 [INFO][4695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" iface="eth0" netns="/var/run/netns/cni-f8761f36-176b-ee15-d06b-ed84b09b9c3b" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.176 [INFO][4695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" iface="eth0" netns="/var/run/netns/cni-f8761f36-176b-ee15-d06b-ed84b09b9c3b" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.176 [INFO][4695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.176 [INFO][4695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.228 [INFO][4724] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.229 [INFO][4724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.237 [INFO][4724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.244 [WARNING][4724] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.244 [INFO][4724] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.248 [INFO][4724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:12.271348 containerd[1461]: 2026-01-20 00:54:12.254 [INFO][4695] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:12.271348 containerd[1461]: time="2026-01-20T00:54:12.270720828Z" level=info msg="TearDown network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" successfully" Jan 20 00:54:12.271348 containerd[1461]: time="2026-01-20T00:54:12.270742709Z" level=info msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" returns successfully" Jan 20 00:54:12.271707 containerd[1461]: time="2026-01-20T00:54:12.271605460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dnc6k,Uid:5673b085-3f3e-4250-ba9e-85fa33b4899b,Namespace:kube-system,Attempt:1,}" Jan 20 00:54:12.271735 kubelet[2520]: E0120 00:54:12.271341 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.207 [INFO][4694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.207 [INFO][4694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" iface="eth0" netns="/var/run/netns/cni-6dbd9c05-08f3-f00b-9f14-688949aa15ef" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.208 [INFO][4694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" iface="eth0" netns="/var/run/netns/cni-6dbd9c05-08f3-f00b-9f14-688949aa15ef" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.208 [INFO][4694] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" iface="eth0" netns="/var/run/netns/cni-6dbd9c05-08f3-f00b-9f14-688949aa15ef" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.208 [INFO][4694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.208 [INFO][4694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.239 [INFO][4742] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.239 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.248 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.257 [WARNING][4742] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.259 [INFO][4742] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.262 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:12.274029 containerd[1461]: 2026-01-20 00:54:12.267 [INFO][4694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:12.274811 containerd[1461]: time="2026-01-20T00:54:12.274737024Z" level=info msg="TearDown network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" successfully" Jan 20 00:54:12.275601 containerd[1461]: time="2026-01-20T00:54:12.275538972Z" level=info msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" returns successfully" Jan 20 00:54:12.284445 systemd[1]: run-netns-cni\x2df8761f36\x2d176b\x2dee15\x2dd06b\x2ded84b09b9c3b.mount: Deactivated successfully. Jan 20 00:54:12.284894 systemd[1]: run-netns-cni\x2d6dbd9c05\x2d08f3\x2df00b\x2d9f14\x2d688949aa15ef.mount: Deactivated successfully. Jan 20 00:54:12.285030 systemd[1]: run-netns-cni\x2d5c831c9e\x2dcee6\x2d300c\x2de7fb\x2daef0fee254ac.mount: Deactivated successfully. Jan 20 00:54:12.288433 containerd[1461]: time="2026-01-20T00:54:12.287787331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-f8mz9,Uid:84cb4cb2-d928-4fed-bf18-3918ea335ce0,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.197 [INFO][4696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.197 [INFO][4696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" iface="eth0" netns="/var/run/netns/cni-5a8706c1-abe8-59eb-a2af-3afc1dd3c7c7" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.199 [INFO][4696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" iface="eth0" netns="/var/run/netns/cni-5a8706c1-abe8-59eb-a2af-3afc1dd3c7c7" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.199 [INFO][4696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" iface="eth0" netns="/var/run/netns/cni-5a8706c1-abe8-59eb-a2af-3afc1dd3c7c7" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.200 [INFO][4696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.200 [INFO][4696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.241 [INFO][4735] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.243 [INFO][4735] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.262 [INFO][4735] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.284 [WARNING][4735] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.284 [INFO][4735] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.287 [INFO][4735] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:12.299557 containerd[1461]: 2026-01-20 00:54:12.296 [INFO][4696] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:12.304737 containerd[1461]: time="2026-01-20T00:54:12.304661946Z" level=info msg="TearDown network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" successfully" Jan 20 00:54:12.304737 containerd[1461]: time="2026-01-20T00:54:12.304704305Z" level=info msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" returns successfully" Jan 20 00:54:12.312972 kubelet[2520]: I0120 00:54:12.305687 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d8bck" podStartSLOduration=32.305665499 podStartE2EDuration="32.305665499s" podCreationTimestamp="2026-01-20 00:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:54:12.279214136 +0000 UTC m=+37.271977663" watchObservedRunningTime="2026-01-20 00:54:12.305665499 +0000 UTC m=+37.298429026" Jan 20 00:54:12.313132 containerd[1461]: time="2026-01-20T00:54:12.307511084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p8vbg,Uid:7e620ed7-8827-4f4a-b020-5c5456115c9e,Namespace:calico-system,Attempt:1,}" Jan 20 00:54:12.307450 systemd[1]: run-netns-cni\x2d5a8706c1\x2dabe8\x2d59eb\x2da2af\x2d3afc1dd3c7c7.mount: Deactivated successfully. Jan 20 00:54:12.509302 systemd-networkd[1384]: cali8914068f4a9: Link UP Jan 20 00:54:12.510136 systemd-networkd[1384]: cali8914068f4a9: Gained carrier Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.404 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0 calico-kube-controllers-6667b974c7- calico-system 38bfb1da-e948-47f5-8ec0-b14e509cc2d8 1020 0 2026-01-20 00:53:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6667b974c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6667b974c7-vm9zj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8914068f4a9 [] [] }} ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.405 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.451 [INFO][4813] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" HandleID="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.451 [INFO][4813] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" HandleID="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6667b974c7-vm9zj", "timestamp":"2026-01-20 00:54:12.451198319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.451 [INFO][4813] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.451 [INFO][4813] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.451 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.458 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.466 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.476 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.478 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.481 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.481 [INFO][4813] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.484 [INFO][4813] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50 Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.489 [INFO][4813] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.498 [INFO][4813] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.498 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" host="localhost" Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.498 [INFO][4813] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:12.522767 containerd[1461]: 2026-01-20 00:54:12.498 [INFO][4813] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" HandleID="k8s-pod-network.0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.505 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0", GenerateName:"calico-kube-controllers-6667b974c7-", Namespace:"calico-system", SelfLink:"", UID:"38bfb1da-e948-47f5-8ec0-b14e509cc2d8", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6667b974c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6667b974c7-vm9zj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8914068f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.505 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.505 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8914068f4a9 ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.509 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.509 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0", GenerateName:"calico-kube-controllers-6667b974c7-", Namespace:"calico-system", SelfLink:"", UID:"38bfb1da-e948-47f5-8ec0-b14e509cc2d8", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6667b974c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50", Pod:"calico-kube-controllers-6667b974c7-vm9zj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8914068f4a9", MAC:"aa:a4:37:eb:1a:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.523520 containerd[1461]: 2026-01-20 00:54:12.518 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50" Namespace="calico-system" Pod="calico-kube-controllers-6667b974c7-vm9zj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:12.555018 containerd[1461]: time="2026-01-20T00:54:12.554697916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.555018 containerd[1461]: time="2026-01-20T00:54:12.554787152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.555018 containerd[1461]: time="2026-01-20T00:54:12.554807120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.555018 containerd[1461]: time="2026-01-20T00:54:12.554951658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.581370 systemd[1]: Started cri-containerd-0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50.scope - libcontainer container 0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50. 
Jan 20 00:54:12.597528 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:12.599706 systemd-networkd[1384]: cali0fdaacd9276: Link UP Jan 20 00:54:12.601281 systemd-networkd[1384]: cali0fdaacd9276: Gained carrier Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.431 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0 calico-apiserver-54d67dfb4- calico-apiserver 84cb4cb2-d928-4fed-bf18-3918ea335ce0 1022 0 2026-01-20 00:53:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d67dfb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d67dfb4-f8mz9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0fdaacd9276 [] [] }} ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.433 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.475 [INFO][4822] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" HandleID="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.475 [INFO][4822] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" HandleID="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000487ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d67dfb4-f8mz9", "timestamp":"2026-01-20 00:54:12.475335207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.475 [INFO][4822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.498 [INFO][4822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.499 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.559 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.566 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.573 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.575 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.577 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.578 [INFO][4822] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.579 [INFO][4822] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.583 [INFO][4822] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4822] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" host="localhost" Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:12.615329 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4822] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" HandleID="k8s-pod-network.6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.594 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"84cb4cb2-d928-4fed-bf18-3918ea335ce0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d67dfb4-f8mz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fdaacd9276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.594 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.594 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fdaacd9276 ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.602 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.603 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"84cb4cb2-d928-4fed-bf18-3918ea335ce0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e", Pod:"calico-apiserver-54d67dfb4-f8mz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fdaacd9276", MAC:"92:c8:47:e5:1d:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.615842 containerd[1461]: 2026-01-20 00:54:12.611 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e" Namespace="calico-apiserver" Pod="calico-apiserver-54d67dfb4-f8mz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:12.630868 containerd[1461]: time="2026-01-20T00:54:12.630800639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6667b974c7-vm9zj,Uid:38bfb1da-e948-47f5-8ec0-b14e509cc2d8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50\"" Jan 20 00:54:12.633932 containerd[1461]: time="2026-01-20T00:54:12.633881279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:54:12.647047 containerd[1461]: time="2026-01-20T00:54:12.646170216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.647047 containerd[1461]: time="2026-01-20T00:54:12.646988593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.647047 containerd[1461]: time="2026-01-20T00:54:12.647007018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.647962 containerd[1461]: time="2026-01-20T00:54:12.647906958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.675261 systemd[1]: Started cri-containerd-6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e.scope - libcontainer container 6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e. Jan 20 00:54:12.689892 containerd[1461]: time="2026-01-20T00:54:12.689801883Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:12.690770 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:12.691935 containerd[1461]: time="2026-01-20T00:54:12.691819569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:54:12.693185 containerd[1461]: time="2026-01-20T00:54:12.691972699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:54:12.693256 kubelet[2520]: E0120 00:54:12.692547 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:54:12.693256 kubelet[2520]: E0120 00:54:12.692596 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:54:12.693256 kubelet[2520]: E0120 00:54:12.692801 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9kjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6667b974c7-vm9zj_calico-system(38bfb1da-e948-47f5-8ec0-b14e509cc2d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:12.694838 kubelet[2520]: E0120 00:54:12.694054 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:54:12.706160 systemd-networkd[1384]: cali6a5711dbdf7: Link UP Jan 20 00:54:12.706370 
systemd-networkd[1384]: cali6a5711dbdf7: Gained carrier Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.429 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0 coredns-674b8bbfcf- kube-system 5673b085-3f3e-4250-ba9e-85fa33b4899b 1019 0 2026-01-20 00:53:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dnc6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a5711dbdf7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.430 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.492 [INFO][4821] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" HandleID="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.493 [INFO][4821] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" HandleID="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dnc6k", "timestamp":"2026-01-20 00:54:12.492728563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.493 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.592 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.664 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.670 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.676 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.678 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.681 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.681 [INFO][4821] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.683 [INFO][4821] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.688 [INFO][4821] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4821] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" host="localhost" Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:12.724446 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4821] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" HandleID="k8s-pod-network.ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.725046 containerd[1461]: 2026-01-20 00:54:12.699 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5673b085-3f3e-4250-ba9e-85fa33b4899b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dnc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5711dbdf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.725046 containerd[1461]: 2026-01-20 00:54:12.701 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.725046 containerd[1461]: 2026-01-20 00:54:12.702 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a5711dbdf7 ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.725046 containerd[1461]: 2026-01-20 00:54:12.707 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.725046 
containerd[1461]: 2026-01-20 00:54:12.711 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5673b085-3f3e-4250-ba9e-85fa33b4899b", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b", Pod:"coredns-674b8bbfcf-dnc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5711dbdf7", MAC:"7a:da:35:21:31:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.725046 containerd[1461]: 2026-01-20 00:54:12.720 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b" Namespace="kube-system" Pod="coredns-674b8bbfcf-dnc6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:12.736309 containerd[1461]: time="2026-01-20T00:54:12.736240443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d67dfb4-f8mz9,Uid:84cb4cb2-d928-4fed-bf18-3918ea335ce0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e\"" Jan 20 00:54:12.738622 containerd[1461]: time="2026-01-20T00:54:12.738250878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:12.751271 containerd[1461]: time="2026-01-20T00:54:12.750937103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.751271 containerd[1461]: time="2026-01-20T00:54:12.750988940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.751271 containerd[1461]: time="2026-01-20T00:54:12.751001463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.751271 containerd[1461]: time="2026-01-20T00:54:12.751143699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.769449 systemd-networkd[1384]: cali2a9746247c3: Gained IPv6LL Jan 20 00:54:12.778255 systemd[1]: Started cri-containerd-ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b.scope - libcontainer container ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b. Jan 20 00:54:12.793293 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:12.802189 systemd-networkd[1384]: cali53aff377052: Link UP Jan 20 00:54:12.802454 systemd-networkd[1384]: cali53aff377052: Gained carrier Jan 20 00:54:12.815975 containerd[1461]: time="2026-01-20T00:54:12.815943555Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:12.818562 containerd[1461]: time="2026-01-20T00:54:12.818523807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:12.818683 containerd[1461]: time="2026-01-20T00:54:12.818600480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:12.818870 kubelet[2520]: E0120 00:54:12.818824 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:12.818917 kubelet[2520]: E0120 00:54:12.818884 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:12.819047 kubelet[2520]: E0120 00:54:12.819000 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gxl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-f8mz9_calico-apiserver(84cb4cb2-d928-4fed-bf18-3918ea335ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:12.820349 kubelet[2520]: E0120 00:54:12.820280 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.464 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--p8vbg-eth0 goldmane-666569f655- calico-system 7e620ed7-8827-4f4a-b020-5c5456115c9e 1021 0 2026-01-20 00:53:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-p8vbg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali53aff377052 [] [] }} ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" 
Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.465 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.504 [INFO][4839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" HandleID="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.504 [INFO][4839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" HandleID="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325b90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-p8vbg", "timestamp":"2026-01-20 00:54:12.504176527 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.504 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.695 [INFO][4839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.761 [INFO][4839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.773 [INFO][4839] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.778 [INFO][4839] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.780 [INFO][4839] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.782 [INFO][4839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.782 [INFO][4839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.784 [INFO][4839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.788 [INFO][4839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.795 [INFO][4839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.795 [INFO][4839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" host="localhost" Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.795 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:12.821182 containerd[1461]: 2026-01-20 00:54:12.795 [INFO][4839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" HandleID="k8s-pod-network.80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.798 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p8vbg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7e620ed7-8827-4f4a-b020-5c5456115c9e", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-p8vbg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali53aff377052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.799 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.799 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53aff377052 ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.801 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.804 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p8vbg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7e620ed7-8827-4f4a-b020-5c5456115c9e", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af", Pod:"goldmane-666569f655-p8vbg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali53aff377052", MAC:"4e:f3:1d:fb:9e:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:12.823159 containerd[1461]: 2026-01-20 00:54:12.815 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af" Namespace="calico-system" Pod="goldmane-666569f655-p8vbg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:12.826353 containerd[1461]: time="2026-01-20T00:54:12.826325221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dnc6k,Uid:5673b085-3f3e-4250-ba9e-85fa33b4899b,Namespace:kube-system,Attempt:1,} returns sandbox id \"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b\"" Jan 20 00:54:12.828878 kubelet[2520]: E0120 00:54:12.827550 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:12.834528 containerd[1461]: time="2026-01-20T00:54:12.834335949Z" level=info msg="CreateContainer within sandbox \"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:54:12.849544 containerd[1461]: time="2026-01-20T00:54:12.849071633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.849544 containerd[1461]: time="2026-01-20T00:54:12.849165247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.849544 containerd[1461]: time="2026-01-20T00:54:12.849177711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.849544 containerd[1461]: time="2026-01-20T00:54:12.849257841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.852726 containerd[1461]: time="2026-01-20T00:54:12.852702409Z" level=info msg="CreateContainer within sandbox \"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f607dd6b54caf75088056169f570798486e45486acdc3ccf4415bd4280b5e0de\"" Jan 20 00:54:12.853874 containerd[1461]: time="2026-01-20T00:54:12.853745602Z" level=info msg="StartContainer for \"f607dd6b54caf75088056169f570798486e45486acdc3ccf4415bd4280b5e0de\"" Jan 20 00:54:12.874235 systemd[1]: Started cri-containerd-80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af.scope - libcontainer container 80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af. Jan 20 00:54:12.892272 systemd[1]: Started cri-containerd-f607dd6b54caf75088056169f570798486e45486acdc3ccf4415bd4280b5e0de.scope - libcontainer container f607dd6b54caf75088056169f570798486e45486acdc3ccf4415bd4280b5e0de. Jan 20 00:54:12.897397 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:12.936323 containerd[1461]: time="2026-01-20T00:54:12.936264660Z" level=info msg="StartContainer for \"f607dd6b54caf75088056169f570798486e45486acdc3ccf4415bd4280b5e0de\" returns successfully" Jan 20 00:54:12.936418 containerd[1461]: time="2026-01-20T00:54:12.936356122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p8vbg,Uid:7e620ed7-8827-4f4a-b020-5c5456115c9e,Namespace:calico-system,Attempt:1,} returns sandbox id \"80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af\"" Jan 20 00:54:12.939195 containerd[1461]: time="2026-01-20T00:54:12.939175763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:54:12.960309 systemd-networkd[1384]: cali8fcda4eb047: Gained IPv6LL Jan 20 00:54:12.999922 containerd[1461]: time="2026-01-20T00:54:12.999831782Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:13.001481 containerd[1461]: time="2026-01-20T00:54:13.001357241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:54:13.001481 containerd[1461]: time="2026-01-20T00:54:13.001417491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:13.001715 kubelet[2520]: E0120 00:54:13.001670 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:13.001790 kubelet[2520]: E0120 00:54:13.001726 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:13.001903 kubelet[2520]: E0120 00:54:13.001854 2520 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p8vbg_calico-system(7e620ed7-8827-4f4a-b020-5c5456115c9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:13.003178 kubelet[2520]: E0120 00:54:13.003135 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" 
podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:13.265625 kubelet[2520]: E0120 00:54:13.265588 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:13.267392 kubelet[2520]: E0120 00:54:13.267357 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:13.269408 kubelet[2520]: E0120 00:54:13.269343 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:13.272374 kubelet[2520]: E0120 00:54:13.271426 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:13.272989 kubelet[2520]: E0120 00:54:13.272886 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:54:13.273352 kubelet[2520]: E0120 00:54:13.273305 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:13.294037 
kubelet[2520]: I0120 00:54:13.293961 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dnc6k" podStartSLOduration=33.293946447 podStartE2EDuration="33.293946447s" podCreationTimestamp="2026-01-20 00:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:54:13.277280828 +0000 UTC m=+38.270044385" watchObservedRunningTime="2026-01-20 00:54:13.293946447 +0000 UTC m=+38.286709974" Jan 20 00:54:13.664341 systemd-networkd[1384]: cali8914068f4a9: Gained IPv6LL Jan 20 00:54:13.920428 systemd-networkd[1384]: cali0fdaacd9276: Gained IPv6LL Jan 20 00:54:14.240314 systemd-networkd[1384]: cali6a5711dbdf7: Gained IPv6LL Jan 20 00:54:14.240831 systemd-networkd[1384]: cali53aff377052: Gained IPv6LL Jan 20 00:54:14.274747 kubelet[2520]: E0120 00:54:14.274388 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:14.274747 kubelet[2520]: E0120 00:54:14.274709 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:14.274747 kubelet[2520]: E0120 00:54:14.274724 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:14.275366 kubelet[2520]: E0120 00:54:14.275187 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:54:14.275366 kubelet[2520]: E0120 00:54:14.275255 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:15.275878 kubelet[2520]: E0120 00:54:15.275807 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:16.296542 systemd[1]: Started 
sshd@8-10.0.0.160:22-10.0.0.1:53870.service - OpenSSH per-connection server daemon (10.0.0.1:53870). Jan 20 00:54:16.355152 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 53870 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:16.357126 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:16.362534 systemd-logind[1444]: New session 9 of user core. Jan 20 00:54:16.367243 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:54:16.513414 sshd[5101]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:16.521813 systemd[1]: sshd@8-10.0.0.160:22-10.0.0.1:53870.service: Deactivated successfully. Jan 20 00:54:16.523449 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:54:16.525162 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:54:16.533441 systemd[1]: Started sshd@9-10.0.0.160:22-10.0.0.1:53882.service - OpenSSH per-connection server daemon (10.0.0.1:53882). Jan 20 00:54:16.534919 systemd-logind[1444]: Removed session 9. Jan 20 00:54:16.561021 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:16.562807 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:16.567295 systemd-logind[1444]: New session 10 of user core. Jan 20 00:54:16.574254 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:54:16.718883 sshd[5116]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:16.729026 systemd[1]: sshd@9-10.0.0.160:22-10.0.0.1:53882.service: Deactivated successfully. Jan 20 00:54:16.732554 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:54:16.734327 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:54:16.740766 systemd[1]: Started sshd@10-10.0.0.160:22-10.0.0.1:53896.service - OpenSSH per-connection server daemon (10.0.0.1:53896). Jan 20 00:54:16.742172 systemd-logind[1444]: Removed session 10. Jan 20 00:54:16.770203 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 53896 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:16.771828 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:16.777582 systemd-logind[1444]: New session 11 of user core. Jan 20 00:54:16.784231 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:54:16.905137 sshd[5135]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:16.908565 systemd[1]: sshd@10-10.0.0.160:22-10.0.0.1:53896.service: Deactivated successfully. Jan 20 00:54:16.910735 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:54:16.912846 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:54:16.914025 systemd-logind[1444]: Removed session 11. Jan 20 00:54:19.100853 kubelet[2520]: I0120 00:54:19.100787 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:54:19.101540 kubelet[2520]: E0120 00:54:19.101292 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:19.163341 systemd[1]: run-containerd-runc-k8s.io-45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5-runc.we4JJK.mount: Deactivated successfully. 
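The recurring dns.go "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than kubelet will propagate into a pod; kubelet keeps the first few (three in upstream Kubernetes) and reports the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation, with the limit, function name, and the extra 9.9.9.9 entry chosen here purely for illustration:

    package main

    import "fmt"

    // maxNameservers follows the upstream kubelet limit of three
    // nameservers per pod resolv.conf; treated as an assumption here.
    const maxNameservers = 3

    // truncateNameservers keeps the first maxNameservers entries and
    // reports whether any were dropped (which triggers the warning).
    func truncateNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        applied, dropped := truncateNameservers(
            []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // 9.9.9.9 is hypothetical
        fmt.Println(applied, "dropped:", dropped) // [1.1.1.1 1.0.0.1 8.8.8.8] dropped: true
    }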
Jan 20 00:54:19.266318 systemd[1]: run-containerd-runc-k8s.io-45c7fc013e1b7716c8dfa8ade80d89be3de3e4f4f432fa13e0907f4ff1551dc5-runc.FAgCU4.mount: Deactivated successfully. Jan 20 00:54:19.285420 kubelet[2520]: E0120 00:54:19.285358 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:21.100055 containerd[1461]: time="2026-01-20T00:54:21.099977500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:54:21.170123 containerd[1461]: time="2026-01-20T00:54:21.170003671Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:21.171658 containerd[1461]: time="2026-01-20T00:54:21.171575565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:54:21.171712 containerd[1461]: time="2026-01-20T00:54:21.171610994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:54:21.171968 kubelet[2520]: E0120 00:54:21.171918 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:21.171968 kubelet[2520]: E0120 00:54:21.171956 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:21.172337 kubelet[2520]: E0120 00:54:21.172064 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df9856728c384f29b8c73d14eea5ef90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:21.175025 containerd[1461]: time="2026-01-20T00:54:21.174977272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:54:21.249337 containerd[1461]: time="2026-01-20T00:54:21.249265471Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:21.250733 containerd[1461]: time="2026-01-20T00:54:21.250673353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:54:21.250815 containerd[1461]: time="2026-01-20T00:54:21.250703849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:54:21.250944 kubelet[2520]: E0120 00:54:21.250885 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:21.250944 kubelet[2520]: E0120 00:54:21.250930 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:21.251109 kubelet[2520]: E0120 00:54:21.251033 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:21.252288 kubelet[2520]: E0120 00:54:21.252224 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:21.916205 systemd[1]: Started sshd@11-10.0.0.160:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910). 
Jan 20 00:54:21.950501 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:21.951935 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:21.956457 systemd-logind[1444]: New session 12 of user core. Jan 20 00:54:21.966223 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:54:22.077508 sshd[5209]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:22.081521 systemd[1]: sshd@11-10.0.0.160:22-10.0.0.1:53910.service: Deactivated successfully. Jan 20 00:54:22.083471 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:54:22.084206 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:54:22.085299 systemd-logind[1444]: Removed session 12. Jan 20 00:54:24.098625 containerd[1461]: time="2026-01-20T00:54:24.098579512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:54:24.155826 containerd[1461]: time="2026-01-20T00:54:24.155792024Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:24.157199 containerd[1461]: time="2026-01-20T00:54:24.157119922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:54:24.157199 containerd[1461]: time="2026-01-20T00:54:24.157183942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:54:24.157353 kubelet[2520]: E0120 00:54:24.157309 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:24.157691 kubelet[2520]: E0120 00:54:24.157363 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:24.157691 kubelet[2520]: E0120 00:54:24.157488 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:24.159722 containerd[1461]: time="2026-01-20T00:54:24.159674034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:54:24.226666 containerd[1461]: time="2026-01-20T00:54:24.226608029Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:24.228126 containerd[1461]: time="2026-01-20T00:54:24.228023370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:54:24.228219 containerd[1461]: time="2026-01-20T00:54:24.228125200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:54:24.228336 kubelet[2520]: E0120 00:54:24.228283 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:24.228336 kubelet[2520]: E0120 00:54:24.228333 2520 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:24.228479 kubelet[2520]: E0120 00:54:24.228427 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:24.229827 kubelet[2520]: E0120 00:54:24.229708 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:25.099136 containerd[1461]: time="2026-01-20T00:54:25.098801080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:25.161111 containerd[1461]: time="2026-01-20T00:54:25.161007156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:25.162354 containerd[1461]: time="2026-01-20T00:54:25.162297864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:25.162387 containerd[1461]: time="2026-01-20T00:54:25.162364970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:25.162486 kubelet[2520]: E0120 00:54:25.162442 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:25.162486 kubelet[2520]: E0120 00:54:25.162488 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:25.162962 kubelet[2520]: E0120 00:54:25.162613 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt2xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-4n657_calico-apiserver(0d6351d0-021b-40ba-9cae-6912429b9dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:25.163836 kubelet[2520]: E0120 00:54:25.163795 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:26.099016 containerd[1461]: time="2026-01-20T00:54:26.098964550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:54:26.156381 containerd[1461]: time="2026-01-20T00:54:26.156320095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:26.157505 containerd[1461]: time="2026-01-20T00:54:26.157444750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:54:26.157605 containerd[1461]: time="2026-01-20T00:54:26.157478615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:26.157631 kubelet[2520]: E0120 00:54:26.157571 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:26.157631 kubelet[2520]: E0120 00:54:26.157602 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:26.157783 kubelet[2520]: E0120 00:54:26.157734 
2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p8vbg_calico-system(7e620ed7-8827-4f4a-b020-5c5456115c9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:26.159142 kubelet[2520]: E0120 00:54:26.158992 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" 
podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:27.093462 systemd[1]: Started sshd@12-10.0.0.160:22-10.0.0.1:56888.service - OpenSSH per-connection server daemon (10.0.0.1:56888). Jan 20 00:54:27.098833 containerd[1461]: time="2026-01-20T00:54:27.098714008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:27.126181 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 56888 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:27.127760 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:27.132255 systemd-logind[1444]: New session 13 of user core. Jan 20 00:54:27.143334 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:54:27.156536 containerd[1461]: time="2026-01-20T00:54:27.156487075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:27.157832 containerd[1461]: time="2026-01-20T00:54:27.157715637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:27.157832 containerd[1461]: time="2026-01-20T00:54:27.157739079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:27.158022 kubelet[2520]: E0120 00:54:27.157960 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:27.158022 kubelet[2520]: E0120 00:54:27.158012 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:27.158377 kubelet[2520]: E0120 00:54:27.158156 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gxl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-f8mz9_calico-apiserver(84cb4cb2-d928-4fed-bf18-3918ea335ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:27.159708 kubelet[2520]: E0120 00:54:27.159631 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:27.252598 sshd[5226]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:27.256462 systemd[1]: sshd@12-10.0.0.160:22-10.0.0.1:56888.service: Deactivated successfully. Jan 20 00:54:27.258266 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:54:27.258911 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:54:27.259999 systemd-logind[1444]: Removed session 13. 
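Every pull of the ghcr.io/flatcar/calico/*:v3.30.4 images above resolves to StatusNotFound, so each pod sync records ErrImagePull and kubelet then reports ImagePullBackOff, retrying with an increasing delay. A sketch of that backoff shape, using the commonly cited kubelet defaults of a 10-second initial delay doubling to a 5-minute cap; the figures and function are illustrative, not kubelet's implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // pullBackoff returns the delay before the n-th retry of a failed
    // image pull: exponential doubling from an initial delay up to a cap.
    func pullBackoff(n int) time.Duration {
        const (
            initialDelay = 10 * time.Second // assumed kubelet default
            maxDelay     = 5 * time.Minute  // assumed kubelet cap
        )
        d := initialDelay
        for i := 0; i < n; i++ {
            d *= 2
            if d > maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 0; n < 6; n++ {
            fmt.Printf("retry %d after %s\n", n, pullBackoff(n)) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
        }
    }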
Jan 20 00:54:28.098621 containerd[1461]: time="2026-01-20T00:54:28.098364545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:54:28.156677 containerd[1461]: time="2026-01-20T00:54:28.156622893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:28.158120 containerd[1461]: time="2026-01-20T00:54:28.158031172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:54:28.158237 containerd[1461]: time="2026-01-20T00:54:28.158125808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:54:28.158302 kubelet[2520]: E0120 00:54:28.158259 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:54:28.158599 kubelet[2520]: E0120 00:54:28.158304 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:54:28.158599 kubelet[2520]: E0120 00:54:28.158446 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9kjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6667b974c7-vm9zj_calico-system(38bfb1da-e948-47f5-8ec0-b14e509cc2d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:28.159767 kubelet[2520]: E0120 00:54:28.159731 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:54:32.264376 systemd[1]: Started sshd@13-10.0.0.160:22-10.0.0.1:56894.service - OpenSSH per-connection server daemon (10.0.0.1:56894). Jan 20 00:54:32.297132 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 56894 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:32.299293 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:32.304046 systemd-logind[1444]: New session 14 of user core. Jan 20 00:54:32.313346 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:54:32.430887 sshd[5247]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:32.435385 systemd[1]: sshd@13-10.0.0.160:22-10.0.0.1:56894.service: Deactivated successfully. Jan 20 00:54:32.437752 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:54:32.438813 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:54:32.440049 systemd-logind[1444]: Removed session 14. 
Jan 20 00:54:34.099280 kubelet[2520]: E0120 00:54:34.099204 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:35.088137 containerd[1461]: time="2026-01-20T00:54:35.088038318Z" level=info msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.146 [WARNING][5272] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hm58c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f383b2b-693c-42c3-b0a3-10cbb7e70071", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed", Pod:"csi-node-driver-hm58c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a9746247c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.147 [INFO][5272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.147 [INFO][5272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" iface="eth0" netns="" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.147 [INFO][5272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.147 [INFO][5272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.176 [INFO][5282] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.176 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.176 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.181 [WARNING][5282] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.181 [INFO][5282] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.183 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.188286 containerd[1461]: 2026-01-20 00:54:35.185 [INFO][5272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.188708 containerd[1461]: time="2026-01-20T00:54:35.188319214Z" level=info msg="TearDown network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" successfully" Jan 20 00:54:35.188708 containerd[1461]: time="2026-01-20T00:54:35.188339442Z" level=info msg="StopPodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" returns successfully" Jan 20 00:54:35.189139 containerd[1461]: time="2026-01-20T00:54:35.189020644Z" level=info msg="RemovePodSandbox for \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" Jan 20 00:54:35.190862 containerd[1461]: time="2026-01-20T00:54:35.190813883Z" level=info msg="Forcibly stopping sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\"" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.226 [WARNING][5300] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hm58c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f383b2b-693c-42c3-b0a3-10cbb7e70071", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaab0584ddc0f37cb4d9277c8c10a7f597aa8e60ee314693416ac712ce4a27ed", Pod:"csi-node-driver-hm58c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a9746247c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.226 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.226 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" iface="eth0" netns="" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.226 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.226 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.249 [INFO][5309] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.250 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.250 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.257 [WARNING][5309] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.257 [INFO][5309] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" HandleID="k8s-pod-network.271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Workload="localhost-k8s-csi--node--driver--hm58c-eth0" Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.262 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.269384 containerd[1461]: 2026-01-20 00:54:35.266 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea" Jan 20 00:54:35.269384 containerd[1461]: time="2026-01-20T00:54:35.268513623Z" level=info msg="TearDown network for sandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" successfully" Jan 20 00:54:35.278862 containerd[1461]: time="2026-01-20T00:54:35.278787636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:35.278862 containerd[1461]: time="2026-01-20T00:54:35.278857186Z" level=info msg="RemovePodSandbox \"271441c20a87d5dbb65d8a0d9c0563df5490443cc71aacf3ec95166277de16ea\" returns successfully" Jan 20 00:54:35.279510 containerd[1461]: time="2026-01-20T00:54:35.279462029Z" level=info msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.315 [WARNING][5327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p8vbg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7e620ed7-8827-4f4a-b020-5c5456115c9e", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af", Pod:"goldmane-666569f655-p8vbg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali53aff377052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.316 [INFO][5327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.316 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" iface="eth0" netns="" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.316 [INFO][5327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.317 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.335 [INFO][5336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.335 [INFO][5336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.335 [INFO][5336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.342 [WARNING][5336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.342 [INFO][5336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.344 [INFO][5336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.349149 containerd[1461]: 2026-01-20 00:54:35.346 [INFO][5327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.349149 containerd[1461]: time="2026-01-20T00:54:35.348841093Z" level=info msg="TearDown network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" successfully" Jan 20 00:54:35.349149 containerd[1461]: time="2026-01-20T00:54:35.348865078Z" level=info msg="StopPodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" returns successfully" Jan 20 00:54:35.349628 containerd[1461]: time="2026-01-20T00:54:35.349299945Z" level=info msg="RemovePodSandbox for \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" Jan 20 00:54:35.349628 containerd[1461]: time="2026-01-20T00:54:35.349331564Z" level=info msg="Forcibly stopping sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\"" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.386 [WARNING][5354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p8vbg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7e620ed7-8827-4f4a-b020-5c5456115c9e", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80af2ae1b728c0e3d59b4823a135e434c6f9b1cc6bb56350523149479b8de1af", Pod:"goldmane-666569f655-p8vbg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali53aff377052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.386 [INFO][5354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.386 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" iface="eth0" netns="" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.386 [INFO][5354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.386 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.414 [INFO][5363] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.415 [INFO][5363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.415 [INFO][5363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.420 [WARNING][5363] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.420 [INFO][5363] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" HandleID="k8s-pod-network.4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Workload="localhost-k8s-goldmane--666569f655--p8vbg-eth0" Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.421 [INFO][5363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.426595 containerd[1461]: 2026-01-20 00:54:35.423 [INFO][5354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a" Jan 20 00:54:35.426988 containerd[1461]: time="2026-01-20T00:54:35.426629602Z" level=info msg="TearDown network for sandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" successfully" Jan 20 00:54:35.430855 containerd[1461]: time="2026-01-20T00:54:35.430788963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:35.430855 containerd[1461]: time="2026-01-20T00:54:35.430860787Z" level=info msg="RemovePodSandbox \"4ff9f5a89fb39c444a2bb6f3cb79c988163d16dc621489999daf926c8d0c0e7a\" returns successfully" Jan 20 00:54:35.431441 containerd[1461]: time="2026-01-20T00:54:35.431371431Z" level=info msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.466 [WARNING][5380] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5673b085-3f3e-4250-ba9e-85fa33b4899b", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b", Pod:"coredns-674b8bbfcf-dnc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5711dbdf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.466 [INFO][5380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.466 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" iface="eth0" netns="" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.466 [INFO][5380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.466 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.490 [INFO][5389] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.490 [INFO][5389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.490 [INFO][5389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.497 [WARNING][5389] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.497 [INFO][5389] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.499 [INFO][5389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.503609 containerd[1461]: 2026-01-20 00:54:35.501 [INFO][5380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.503609 containerd[1461]: time="2026-01-20T00:54:35.503577856Z" level=info msg="TearDown network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" successfully" Jan 20 00:54:35.503609 containerd[1461]: time="2026-01-20T00:54:35.503604685Z" level=info msg="StopPodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" returns successfully" Jan 20 00:54:35.504266 containerd[1461]: time="2026-01-20T00:54:35.504241660Z" level=info msg="RemovePodSandbox for \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" Jan 20 00:54:35.504308 containerd[1461]: time="2026-01-20T00:54:35.504273680Z" level=info msg="Forcibly stopping sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\"" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.537 [WARNING][5408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5673b085-3f3e-4250-ba9e-85fa33b4899b", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ced67c499e89c6823889e89788c86eceb6576400d28d0d5c70f737c33ff9b50b", Pod:"coredns-674b8bbfcf-dnc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a5711dbdf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.538 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.538 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" iface="eth0" netns="" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.538 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.538 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.561 [INFO][5417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.561 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.562 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.569 [WARNING][5417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.569 [INFO][5417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" HandleID="k8s-pod-network.bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Workload="localhost-k8s-coredns--674b8bbfcf--dnc6k-eth0" Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.571 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.575696 containerd[1461]: 2026-01-20 00:54:35.573 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01" Jan 20 00:54:35.576150 containerd[1461]: time="2026-01-20T00:54:35.575754203Z" level=info msg="TearDown network for sandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" successfully" Jan 20 00:54:35.581731 containerd[1461]: time="2026-01-20T00:54:35.581653357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:35.581814 containerd[1461]: time="2026-01-20T00:54:35.581735060Z" level=info msg="RemovePodSandbox \"bba599e402a4a816c9fc9fcb12687b6dc1be3bb65e88d2db2f6e75059da47a01\" returns successfully" Jan 20 00:54:35.582406 containerd[1461]: time="2026-01-20T00:54:35.582315195Z" level=info msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.614 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0", GenerateName:"calico-kube-controllers-6667b974c7-", Namespace:"calico-system", SelfLink:"", UID:"38bfb1da-e948-47f5-8ec0-b14e509cc2d8", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6667b974c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50", Pod:"calico-kube-controllers-6667b974c7-vm9zj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8914068f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.614 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.614 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" iface="eth0" netns="" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.614 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.614 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.637 [INFO][5443] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.637 [INFO][5443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.638 [INFO][5443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.643 [WARNING][5443] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.643 [INFO][5443] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.644 [INFO][5443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.649576 containerd[1461]: 2026-01-20 00:54:35.647 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.649576 containerd[1461]: time="2026-01-20T00:54:35.649524772Z" level=info msg="TearDown network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" successfully" Jan 20 00:54:35.649576 containerd[1461]: time="2026-01-20T00:54:35.649548647Z" level=info msg="StopPodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" returns successfully" Jan 20 00:54:35.651280 containerd[1461]: time="2026-01-20T00:54:35.651201709Z" level=info msg="RemovePodSandbox for \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" Jan 20 00:54:35.651280 containerd[1461]: time="2026-01-20T00:54:35.651252203Z" level=info msg="Forcibly stopping sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\"" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.692 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0", GenerateName:"calico-kube-controllers-6667b974c7-", Namespace:"calico-system", SelfLink:"", UID:"38bfb1da-e948-47f5-8ec0-b14e509cc2d8", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6667b974c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0783576f72cfa280cf875e8857237531dfae35b76d0931729037d58533e53b50", Pod:"calico-kube-controllers-6667b974c7-vm9zj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8914068f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.693 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.693 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" iface="eth0" netns="" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.693 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.693 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.716 [INFO][5469] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.716 [INFO][5469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.716 [INFO][5469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.722 [WARNING][5469] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.722 [INFO][5469] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" HandleID="k8s-pod-network.391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Workload="localhost-k8s-calico--kube--controllers--6667b974c7--vm9zj-eth0" Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.723 [INFO][5469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.728851 containerd[1461]: 2026-01-20 00:54:35.726 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58" Jan 20 00:54:35.729305 containerd[1461]: time="2026-01-20T00:54:35.728893400Z" level=info msg="TearDown network for sandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" successfully" Jan 20 00:54:35.733258 containerd[1461]: time="2026-01-20T00:54:35.733218315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:35.733343 containerd[1461]: time="2026-01-20T00:54:35.733272796Z" level=info msg="RemovePodSandbox \"391bc290ede3d02c8b7b335e929003d3abeb1e7bd095383ccf9daac2bef54c58\" returns successfully" Jan 20 00:54:35.733953 containerd[1461]: time="2026-01-20T00:54:35.733917054Z" level=info msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.771 [WARNING][5486] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" WorkloadEndpoint="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.771 [INFO][5486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.771 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" iface="eth0" netns="" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.771 [INFO][5486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.771 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.791 [INFO][5495] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.791 [INFO][5495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.791 [INFO][5495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.799 [WARNING][5495] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.799 [INFO][5495] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.802 [INFO][5495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.807810 containerd[1461]: 2026-01-20 00:54:35.805 [INFO][5486] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.807810 containerd[1461]: time="2026-01-20T00:54:35.807795257Z" level=info msg="TearDown network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" successfully" Jan 20 00:54:35.807810 containerd[1461]: time="2026-01-20T00:54:35.807821005Z" level=info msg="StopPodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" returns successfully" Jan 20 00:54:35.808437 containerd[1461]: time="2026-01-20T00:54:35.808402037Z" level=info msg="RemovePodSandbox for \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" Jan 20 00:54:35.808469 containerd[1461]: time="2026-01-20T00:54:35.808442292Z" level=info msg="Forcibly stopping sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\"" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.842 [WARNING][5516] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" WorkloadEndpoint="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.842 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.842 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" iface="eth0" netns="" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.842 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.842 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.867 [INFO][5524] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.867 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.867 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.872 [WARNING][5524] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.872 [INFO][5524] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" HandleID="k8s-pod-network.7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Workload="localhost-k8s-whisker--6f9877bdc9--7cbwv-eth0" Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.875 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.884163 containerd[1461]: 2026-01-20 00:54:35.877 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a" Jan 20 00:54:35.884473 containerd[1461]: time="2026-01-20T00:54:35.884200709Z" level=info msg="TearDown network for sandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" successfully" Jan 20 00:54:35.888323 containerd[1461]: time="2026-01-20T00:54:35.888247108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:35.888323 containerd[1461]: time="2026-01-20T00:54:35.888319414Z" level=info msg="RemovePodSandbox \"7c2a4faa2ebb5857de81887ddf76b6eb8f8ab3c2df4cfb6f9bbfcfab1afcd10a\" returns successfully" Jan 20 00:54:35.888863 containerd[1461]: time="2026-01-20T00:54:35.888835108Z" level=info msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.926 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d8bck-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8996315-c1bc-44a5-b42d-133ff549c4ad", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae", Pod:"coredns-674b8bbfcf-d8bck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fcda4eb047", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.928 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.928 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" iface="eth0" netns="" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.928 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.928 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.954 [INFO][5551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.955 [INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.955 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.960 [WARNING][5551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.961 [INFO][5551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.962 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:35.966959 containerd[1461]: 2026-01-20 00:54:35.964 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:35.967715 containerd[1461]: time="2026-01-20T00:54:35.967550682Z" level=info msg="TearDown network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" successfully" Jan 20 00:54:35.967715 containerd[1461]: time="2026-01-20T00:54:35.967579115Z" level=info msg="StopPodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" returns successfully" Jan 20 00:54:35.968916 containerd[1461]: time="2026-01-20T00:54:35.968564895Z" level=info msg="RemovePodSandbox for \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" Jan 20 00:54:35.968916 containerd[1461]: time="2026-01-20T00:54:35.968601274Z" level=info msg="Forcibly stopping sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\"" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.010 [WARNING][5567] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d8bck-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8996315-c1bc-44a5-b42d-133ff549c4ad", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5873c32f3face39d46d744f315919468bc5cd37e50237d2004e45f399e6cacae", Pod:"coredns-674b8bbfcf-d8bck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fcda4eb047", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.010 [INFO][5567] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.010 [INFO][5567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" iface="eth0" netns="" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.010 [INFO][5567] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.010 [INFO][5567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.058 [INFO][5575] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.059 [INFO][5575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.059 [INFO][5575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.064 [WARNING][5575] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.064 [INFO][5575] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" HandleID="k8s-pod-network.82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Workload="localhost-k8s-coredns--674b8bbfcf--d8bck-eth0" Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.066 [INFO][5575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:36.071140 containerd[1461]: 2026-01-20 00:54:36.068 [INFO][5567] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0" Jan 20 00:54:36.071140 containerd[1461]: time="2026-01-20T00:54:36.071110770Z" level=info msg="TearDown network for sandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" successfully" Jan 20 00:54:36.075245 containerd[1461]: time="2026-01-20T00:54:36.075214302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:36.075330 containerd[1461]: time="2026-01-20T00:54:36.075261942Z" level=info msg="RemovePodSandbox \"82ad14f9bfd35a4746a6ae380f4a4490228d6b2c3b948fa04fde41825466ccc0\" returns successfully" Jan 20 00:54:36.075862 containerd[1461]: time="2026-01-20T00:54:36.075807590Z" level=info msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.112 [WARNING][5592] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"84cb4cb2-d928-4fed-bf18-3918ea335ce0", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e", Pod:"calico-apiserver-54d67dfb4-f8mz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fdaacd9276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.112 [INFO][5592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.112 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" iface="eth0" netns="" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.112 [INFO][5592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.112 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.135 [INFO][5602] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.135 [INFO][5602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.135 [INFO][5602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.141 [WARNING][5602] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.141 [INFO][5602] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.142 [INFO][5602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:36.147707 containerd[1461]: 2026-01-20 00:54:36.145 [INFO][5592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.148506 containerd[1461]: time="2026-01-20T00:54:36.147735797Z" level=info msg="TearDown network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" successfully" Jan 20 00:54:36.148506 containerd[1461]: time="2026-01-20T00:54:36.147759992Z" level=info msg="StopPodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" returns successfully" Jan 20 00:54:36.148506 containerd[1461]: time="2026-01-20T00:54:36.148342824Z" level=info msg="RemovePodSandbox for \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" Jan 20 00:54:36.148506 containerd[1461]: time="2026-01-20T00:54:36.148377059Z" level=info msg="Forcibly stopping sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\"" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.183 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"84cb4cb2-d928-4fed-bf18-3918ea335ce0", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cef38f3c6060fc3ec5cf475f0d1ae677552134bd99d0c9e1532b23ad4c8d97e", Pod:"calico-apiserver-54d67dfb4-f8mz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fdaacd9276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.184 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.184 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" iface="eth0" netns="" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.184 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.184 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.202 [INFO][5629] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.202 [INFO][5629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.202 [INFO][5629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.207 [WARNING][5629] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.208 [INFO][5629] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" HandleID="k8s-pod-network.93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Workload="localhost-k8s-calico--apiserver--54d67dfb4--f8mz9-eth0" Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.209 [INFO][5629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:36.214415 containerd[1461]: 2026-01-20 00:54:36.211 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1" Jan 20 00:54:36.214415 containerd[1461]: time="2026-01-20T00:54:36.214396666Z" level=info msg="TearDown network for sandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" successfully" Jan 20 00:54:36.218415 containerd[1461]: time="2026-01-20T00:54:36.218260621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:36.218415 containerd[1461]: time="2026-01-20T00:54:36.218353584Z" level=info msg="RemovePodSandbox \"93a9e4e0fdbcb730ff4b4af7415edd04f5896ca9d95d161bf11131a082e0fee1\" returns successfully" Jan 20 00:54:36.219724 containerd[1461]: time="2026-01-20T00:54:36.218987774Z" level=info msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.254 [WARNING][5646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6351d0-021b-40ba-9cae-6912429b9dd9", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d", Pod:"calico-apiserver-54d67dfb4-4n657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1b89a70390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.254 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.254 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" iface="eth0" netns="" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.254 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.254 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.276 [INFO][5654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.276 [INFO][5654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.276 [INFO][5654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.281 [WARNING][5654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.281 [INFO][5654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.283 [INFO][5654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:36.287647 containerd[1461]: 2026-01-20 00:54:36.285 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.287647 containerd[1461]: time="2026-01-20T00:54:36.287640706Z" level=info msg="TearDown network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" successfully" Jan 20 00:54:36.287647 containerd[1461]: time="2026-01-20T00:54:36.287696591Z" level=info msg="StopPodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" returns successfully" Jan 20 00:54:36.288268 containerd[1461]: time="2026-01-20T00:54:36.288208447Z" level=info msg="RemovePodSandbox for \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" Jan 20 00:54:36.288268 containerd[1461]: time="2026-01-20T00:54:36.288253551Z" level=info msg="Forcibly stopping sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\"" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.328 [WARNING][5671] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0", GenerateName:"calico-apiserver-54d67dfb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6351d0-021b-40ba-9cae-6912429b9dd9", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d67dfb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68303a4cb5988fe064226386a553c0a735cb84421b18ada6456a185b02580e9d", Pod:"calico-apiserver-54d67dfb4-4n657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1b89a70390", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.328 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.328 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" iface="eth0" netns="" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.328 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.328 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.361 [INFO][5680] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.361 [INFO][5680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.361 [INFO][5680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.367 [WARNING][5680] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.367 [INFO][5680] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" HandleID="k8s-pod-network.c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Workload="localhost-k8s-calico--apiserver--54d67dfb4--4n657-eth0" Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.378 [INFO][5680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:36.387240 containerd[1461]: 2026-01-20 00:54:36.382 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44" Jan 20 00:54:36.389323 containerd[1461]: time="2026-01-20T00:54:36.387727331Z" level=info msg="TearDown network for sandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" successfully" Jan 20 00:54:36.391690 containerd[1461]: time="2026-01-20T00:54:36.391645717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:54:36.391787 containerd[1461]: time="2026-01-20T00:54:36.391772463Z" level=info msg="RemovePodSandbox \"c689b3b4d950ab30424aa26f92f9dd96803741d4851f281fe8844fcb35dabd44\" returns successfully" Jan 20 00:54:37.101033 kubelet[2520]: E0120 00:54:37.100919 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:37.456354 systemd[1]: Started sshd@14-10.0.0.160:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Jan 20 00:54:37.514419 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:37.515057 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:37.522458 systemd-logind[1444]: New session 15 of user core. Jan 20 00:54:37.531238 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:54:37.659739 sshd[5688]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:37.666857 systemd[1]: sshd@14-10.0.0.160:22-10.0.0.1:47948.service: Deactivated successfully. 
Jan 20 00:54:37.668581 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:54:37.670139 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:54:37.677345 systemd[1]: Started sshd@15-10.0.0.160:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956). Jan 20 00:54:37.678443 systemd-logind[1444]: Removed session 15. Jan 20 00:54:37.704997 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:37.706789 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:37.711160 systemd-logind[1444]: New session 16 of user core. Jan 20 00:54:37.717228 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:54:37.976468 sshd[5702]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:37.988433 systemd[1]: sshd@15-10.0.0.160:22-10.0.0.1:47956.service: Deactivated successfully. Jan 20 00:54:37.990053 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:54:37.992494 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:54:38.001779 systemd[1]: Started sshd@16-10.0.0.160:22-10.0.0.1:47966.service - OpenSSH per-connection server daemon (10.0.0.1:47966). Jan 20 00:54:38.003167 systemd-logind[1444]: Removed session 16. Jan 20 00:54:38.031695 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 47966 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:38.033346 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:38.037955 systemd-logind[1444]: New session 17 of user core. Jan 20 00:54:38.044225 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:54:38.583816 sshd[5714]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:38.595655 systemd[1]: Started sshd@17-10.0.0.160:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972). Jan 20 00:54:38.596253 systemd[1]: sshd@16-10.0.0.160:22-10.0.0.1:47966.service: Deactivated successfully. Jan 20 00:54:38.597869 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:54:38.601196 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:54:38.608720 systemd-logind[1444]: Removed session 17. Jan 20 00:54:38.659774 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:38.662508 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:38.669654 systemd-logind[1444]: New session 18 of user core. Jan 20 00:54:38.679202 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:54:38.891862 sshd[5731]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:38.899960 systemd[1]: sshd@17-10.0.0.160:22-10.0.0.1:47972.service: Deactivated successfully. Jan 20 00:54:38.902145 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:54:38.903870 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:54:38.915767 systemd[1]: Started sshd@18-10.0.0.160:22-10.0.0.1:47976.service - OpenSSH per-connection server daemon (10.0.0.1:47976). Jan 20 00:54:38.916838 systemd-logind[1444]: Removed session 18. 
Jan 20 00:54:38.942642 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 47976 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:38.944329 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:38.949611 systemd-logind[1444]: New session 19 of user core. Jan 20 00:54:38.957247 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:54:39.074266 sshd[5746]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:39.078013 systemd[1]: sshd@18-10.0.0.160:22-10.0.0.1:47976.service: Deactivated successfully. Jan 20 00:54:39.080497 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:54:39.081177 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:54:39.082291 systemd-logind[1444]: Removed session 19. Jan 20 00:54:39.098599 kubelet[2520]: E0120 00:54:39.098436 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:39.098599 kubelet[2520]: E0120 00:54:39.098436 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:42.098210 kubelet[2520]: E0120 00:54:42.098144 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:42.098210 kubelet[2520]: E0120 00:54:42.098141 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6667b974c7-vm9zj" podUID="38bfb1da-e948-47f5-8ec0-b14e509cc2d8" Jan 20 00:54:44.097412 systemd[1]: Started sshd@19-10.0.0.160:22-10.0.0.1:49332.service - OpenSSH per-connection server 
daemon (10.0.0.1:49332). Jan 20 00:54:44.126619 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 49332 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:44.128517 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:44.132882 systemd-logind[1444]: New session 20 of user core. Jan 20 00:54:44.137246 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:54:44.242717 sshd[5764]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:44.246494 systemd[1]: sshd@19-10.0.0.160:22-10.0.0.1:49332.service: Deactivated successfully. Jan 20 00:54:44.248323 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:54:44.249028 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:54:44.250301 systemd-logind[1444]: Removed session 20. Jan 20 00:54:48.099715 containerd[1461]: time="2026-01-20T00:54:48.099232750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:54:48.176923 containerd[1461]: time="2026-01-20T00:54:48.176852847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:48.178468 containerd[1461]: time="2026-01-20T00:54:48.178408178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:54:48.178540 containerd[1461]: time="2026-01-20T00:54:48.178479521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:54:48.178781 kubelet[2520]: E0120 00:54:48.178713 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:48.178781 kubelet[2520]: E0120 00:54:48.178777 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:54:48.179424 kubelet[2520]: E0120 00:54:48.178907 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df9856728c384f29b8c73d14eea5ef90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:48.181558 containerd[1461]: time="2026-01-20T00:54:48.181430504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:54:48.244929 containerd[1461]: time="2026-01-20T00:54:48.244866218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:48.246295 containerd[1461]: time="2026-01-20T00:54:48.246213119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:54:48.246411 containerd[1461]: time="2026-01-20T00:54:48.246283608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:54:48.246503 kubelet[2520]: E0120 00:54:48.246443 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:48.246503 kubelet[2520]: E0120 00:54:48.246484 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:54:48.246706 kubelet[2520]: E0120 00:54:48.246603 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4fzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775dbdb97f-rmk2w_calico-system(db92007e-a07a-40c5-aa7a-9a7981e0ad4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:48.248780 kubelet[2520]: E0120 00:54:48.248704 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775dbdb97f-rmk2w" podUID="db92007e-a07a-40c5-aa7a-9a7981e0ad4e" Jan 20 00:54:49.258416 systemd[1]: Started sshd@20-10.0.0.160:22-10.0.0.1:49346.service - OpenSSH per-connection server daemon (10.0.0.1:49346). 
Jan 20 00:54:49.306775 sshd[5788]: Accepted publickey for core from 10.0.0.1 port 49346 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:49.308683 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:49.317339 systemd-logind[1444]: New session 21 of user core. Jan 20 00:54:49.321216 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:54:49.448717 sshd[5788]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:49.453069 systemd[1]: sshd@20-10.0.0.160:22-10.0.0.1:49346.service: Deactivated successfully. Jan 20 00:54:49.455819 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:54:49.456710 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:54:49.457997 systemd-logind[1444]: Removed session 21. Jan 20 00:54:50.100276 containerd[1461]: time="2026-01-20T00:54:50.099470584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:50.156740 containerd[1461]: time="2026-01-20T00:54:50.156547877Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:50.157827 containerd[1461]: time="2026-01-20T00:54:50.157726589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:50.157827 containerd[1461]: time="2026-01-20T00:54:50.157801699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:50.158049 kubelet[2520]: E0120 00:54:50.157989 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:50.158367 kubelet[2520]: E0120 00:54:50.158040 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:50.158989 kubelet[2520]: E0120 00:54:50.158600 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt2xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-4n657_calico-apiserver(0d6351d0-021b-40ba-9cae-6912429b9dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:50.160703 kubelet[2520]: E0120 00:54:50.159921 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-4n657" podUID="0d6351d0-021b-40ba-9cae-6912429b9dd9" Jan 20 00:54:52.099230 containerd[1461]: time="2026-01-20T00:54:52.099167403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:54:52.167850 containerd[1461]: time="2026-01-20T00:54:52.167784544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:52.169157 containerd[1461]: time="2026-01-20T00:54:52.169120105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:54:52.169263 containerd[1461]: time="2026-01-20T00:54:52.169201948Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:54:52.169404 kubelet[2520]: E0120 00:54:52.169347 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:52.169404 kubelet[2520]: E0120 00:54:52.169395 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:52.169763 kubelet[2520]: E0120 00:54:52.169593 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:52.170160 containerd[1461]: time="2026-01-20T00:54:52.170128184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:54:52.226896 containerd[1461]: time="2026-01-20T00:54:52.226829493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:52.228053 containerd[1461]: time="2026-01-20T00:54:52.227978956Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:54:52.228053 containerd[1461]: time="2026-01-20T00:54:52.228007610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:52.228355 kubelet[2520]: E0120 00:54:52.228273 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:52.228355 kubelet[2520]: E0120 00:54:52.228320 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:54:52.228595 kubelet[2520]: E0120 00:54:52.228492 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p8vbg_calico-system(7e620ed7-8827-4f4a-b020-5c5456115c9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:52.228728 containerd[1461]: time="2026-01-20T00:54:52.228696535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:54:52.230528 kubelet[2520]: E0120 00:54:52.230325 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p8vbg" podUID="7e620ed7-8827-4f4a-b020-5c5456115c9e" Jan 20 00:54:52.292918 containerd[1461]: time="2026-01-20T00:54:52.292874147Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:52.294188 containerd[1461]: time="2026-01-20T00:54:52.294054588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:54:52.294188 containerd[1461]: time="2026-01-20T00:54:52.294145739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:54:52.294386 kubelet[2520]: E0120 00:54:52.294340 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:52.294492 kubelet[2520]: E0120 00:54:52.294411 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:52.294577 kubelet[2520]: E0120 00:54:52.294516 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hm58c_calico-system(2f383b2b-693c-42c3-b0a3-10cbb7e70071): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:52.295998 kubelet[2520]: E0120 00:54:52.295950 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hm58c" podUID="2f383b2b-693c-42c3-b0a3-10cbb7e70071" Jan 20 00:54:54.099046 containerd[1461]: time="2026-01-20T00:54:54.098941632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:54:54.158205 containerd[1461]: 
time="2026-01-20T00:54:54.157004866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:54.158567 containerd[1461]: time="2026-01-20T00:54:54.158430395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:54:54.158567 containerd[1461]: time="2026-01-20T00:54:54.158461684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:54:54.158841 kubelet[2520]: E0120 00:54:54.158732 2520 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:54.159544 kubelet[2520]: E0120 00:54:54.158855 2520 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:54:54.159544 kubelet[2520]: E0120 00:54:54.159037 2520 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gxl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54d67dfb4-f8mz9_calico-apiserver(84cb4cb2-d928-4fed-bf18-3918ea335ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:54.160430 kubelet[2520]: E0120 00:54:54.160362 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54d67dfb4-f8mz9" podUID="84cb4cb2-d928-4fed-bf18-3918ea335ce0" Jan 20 00:54:54.462798 systemd[1]: Started sshd@21-10.0.0.160:22-10.0.0.1:46666.service - OpenSSH per-connection server daemon (10.0.0.1:46666). Jan 20 00:54:54.495339 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 46666 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:54.496851 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:54.501356 systemd-logind[1444]: New session 22 of user core. Jan 20 00:54:54.508203 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:54:54.627605 sshd[5826]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:54.630894 systemd[1]: sshd@21-10.0.0.160:22-10.0.0.1:46666.service: Deactivated successfully. Jan 20 00:54:54.632691 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:54:54.634400 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:54:54.635878 systemd-logind[1444]: Removed session 22. Jan 20 00:54:55.099137 kubelet[2520]: E0120 00:54:55.098484 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"