Jan 24 00:56:53.147467 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:56:53.147490 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:56:53.147502 kernel: BIOS-provided physical RAM map: Jan 24 00:56:53.147508 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 00:56:53.147513 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 00:56:53.147519 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:56:53.147525 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 24 00:56:53.147531 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 24 00:56:53.147536 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:56:53.147544 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:56:53.147549 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:56:53.147555 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:56:53.147560 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:56:53.147566 kernel: NX (Execute Disable) protection: active Jan 24 00:56:53.147572 kernel: APIC: Static calls initialized Jan 24 00:56:53.147581 kernel: SMBIOS 2.8 present. 
Jan 24 00:56:53.147587 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 00:56:53.147593 kernel: Hypervisor detected: KVM Jan 24 00:56:53.147598 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:56:53.147604 kernel: kvm-clock: using sched offset of 4691457083 cycles Jan 24 00:56:53.147611 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:56:53.147617 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:56:53.147623 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:56:53.147629 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:56:53.147638 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 00:56:53.147644 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:56:53.147650 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:56:53.147656 kernel: Using GB pages for direct mapping Jan 24 00:56:53.147662 kernel: ACPI: Early table checksum verification disabled Jan 24 00:56:53.147668 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 00:56:53.147674 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147680 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147686 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147694 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 00:56:53.147700 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147706 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147712 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147724 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:56:53.147734 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 00:56:53.147746 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 00:56:53.147765 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 00:56:53.147778 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 00:56:53.147789 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 00:56:53.147802 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 00:56:53.147811 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 00:56:53.147822 kernel: No NUMA configuration found Jan 24 00:56:53.147833 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 00:56:53.147844 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 24 00:56:53.147861 kernel: Zone ranges: Jan 24 00:56:53.147867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:56:53.147873 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 00:56:53.147880 kernel: Normal empty Jan 24 00:56:53.147886 kernel: Movable zone start for each node Jan 24 00:56:53.147892 kernel: Early memory node ranges Jan 24 00:56:53.147902 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 00:56:53.147913 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 00:56:53.147923 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 24 00:56:53.147938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:56:53.147950 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:56:53.147961 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 00:56:53.147971 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:56:53.147984 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:56:53.147994 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:56:53.148005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:56:53.148016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:56:53.148027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:56:53.148035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:56:53.148097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:56:53.148111 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:56:53.148122 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:56:53.148134 kernel: TSC deadline timer available Jan 24 00:56:53.148144 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 24 00:56:53.148157 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:56:53.148167 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:56:53.148178 kernel: kvm-guest: setup PV sched yield Jan 24 00:56:53.148189 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:56:53.148204 kernel: Booting paravirtualized kernel on KVM Jan 24 00:56:53.148214 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:56:53.148223 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:56:53.148232 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 24 00:56:53.148310 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 24 00:56:53.148318 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:56:53.148325 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:56:53.148331 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:56:53.148338 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:56:53.148349 kernel: random: crng init done Jan 24 00:56:53.148355 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:56:53.148362 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:56:53.148368 kernel: Fallback order for Node 0: 0 Jan 24 00:56:53.148374 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 24 00:56:53.148380 kernel: Policy zone: DMA32 Jan 24 00:56:53.148387 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:56:53.148393 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 24 00:56:53.148402 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:56:53.148408 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:56:53.148415 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:56:53.148421 kernel: Dynamic Preempt: voluntary Jan 24 00:56:53.148427 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:56:53.148434 kernel: rcu: RCU event tracing is enabled. Jan 24 00:56:53.148440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:56:53.148447 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:56:53.148453 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:56:53.148462 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:56:53.148468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:56:53.148474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:56:53.148481 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:56:53.148487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:56:53.148493 kernel: Console: colour VGA+ 80x25 Jan 24 00:56:53.148499 kernel: printk: console [ttyS0] enabled Jan 24 00:56:53.148505 kernel: ACPI: Core revision 20230628 Jan 24 00:56:53.148512 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:56:53.148518 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:56:53.148526 kernel: x2apic enabled Jan 24 00:56:53.148533 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:56:53.148539 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:56:53.148546 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:56:53.148552 kernel: kvm-guest: setup PV IPIs Jan 24 00:56:53.148558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:56:53.148575 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:56:53.148581 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 24 00:56:53.148588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:56:53.148594 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:56:53.148601 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:56:53.148609 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:56:53.148616 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:56:53.148623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:56:53.148629 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:56:53.148641 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:56:53.148658 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 24 00:56:53.148670 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:56:53.148684 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:56:53.148696 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:56:53.148708 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:56:53.148722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:56:53.148733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:56:53.148747 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:56:53.148764 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:56:53.148776 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:56:53.148783 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:56:53.148790 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:56:53.148796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:56:53.148803 kernel: landlock: Up and running. Jan 24 00:56:53.148810 kernel: SELinux: Initializing. Jan 24 00:56:53.148816 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:56:53.148823 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:56:53.148832 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:56:53.148839 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:56:53.148846 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:56:53.148852 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:56:53.148859 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:56:53.148865 kernel: signal: max sigframe size: 1776 Jan 24 00:56:53.148872 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:56:53.148879 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:56:53.148886 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:56:53.148895 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:56:53.148901 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:56:53.148907 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 24 00:56:53.148914 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:56:53.148920 kernel: smpboot: Max logical packages: 1 Jan 24 00:56:53.148932 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:56:53.148945 kernel: devtmpfs: initialized Jan 24 00:56:53.148956 kernel: x86/mm: Memory block size: 128MB Jan 24 00:56:53.148966 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:56:53.148983 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:56:53.148995 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:56:53.149006 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:56:53.149017 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:56:53.149030 kernel: audit: type=2000 audit(1769216210.961:1): state=initialized audit_enabled=0 res=1 Jan 24 00:56:53.149041 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:56:53.149099 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:56:53.149107 kernel: cpuidle: using governor menu Jan 24 00:56:53.149113 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:56:53.149124 kernel: dca service started, version 1.12.1 Jan 24 00:56:53.149130 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:56:53.149137 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:56:53.149144 kernel: PCI: Using configuration type 1 for base access Jan 24 00:56:53.149150 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:56:53.149157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:56:53.149163 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:56:53.149170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:56:53.149177 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:56:53.149185 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:56:53.149192 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:56:53.149198 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:56:53.149205 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:56:53.149212 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:56:53.149218 kernel: ACPI: Interpreter enabled Jan 24 00:56:53.149225 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:56:53.149232 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:56:53.149286 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:56:53.149297 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:56:53.149304 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:56:53.149310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:56:53.149547 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:56:53.149709 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:56:53.149901 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:56:53.149914 kernel: PCI host bridge to bus 0000:00 Jan 24 00:56:53.150151 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:56:53.150367 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 24 00:56:53.150511 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:56:53.150624 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:56:53.150732 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:56:53.150839 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 00:56:53.150952 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:56:53.151190 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:56:53.151438 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:56:53.151564 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 24 00:56:53.151682 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 24 00:56:53.151800 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:56:53.151917 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:56:53.152149 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:56:53.152362 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 24 00:56:53.152485 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 24 00:56:53.152605 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:56:53.152733 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 24 00:56:53.152853 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 24 00:56:53.152971 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 24 00:56:53.153197 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 00:56:53.153409 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:56:53.153532 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 24 00:56:53.153665 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 24 00:56:53.153841 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 24 00:56:53.154026 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:56:53.154226 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:56:53.154415 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:56:53.154544 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:56:53.154681 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 24 00:56:53.154871 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 24 00:56:53.155020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:56:53.155194 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 24 00:56:53.155205 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:56:53.155217 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:56:53.155224 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:56:53.155230 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:56:53.155289 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:56:53.155298 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:56:53.155305 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:56:53.155312 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:56:53.155318 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
24 00:56:53.155325 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:56:53.155335 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:56:53.155342 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:56:53.155348 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:56:53.155355 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:56:53.155361 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:56:53.155368 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:56:53.155374 kernel: iommu: Default domain type: Translated Jan 24 00:56:53.155381 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:56:53.155388 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:56:53.155397 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:56:53.155403 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 00:56:53.155410 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 24 00:56:53.155535 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:56:53.155653 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:56:53.155770 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:56:53.155779 kernel: vgaarb: loaded Jan 24 00:56:53.155786 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:56:53.155796 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:56:53.155803 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:56:53.155809 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:56:53.155816 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:56:53.155823 kernel: pnp: PnP ACPI init Jan 24 00:56:53.155951 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:56:53.155961 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:56:53.155968 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:56:53.155978 kernel: NET: Registered PF_INET protocol family Jan 24 00:56:53.155985 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:56:53.155992 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:56:53.155998 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:56:53.156005 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:56:53.156011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:56:53.156018 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:56:53.156024 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:56:53.156031 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:56:53.156040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:56:53.156084 kernel: NET: Registered PF_XDP protocol family Jan 24 00:56:53.156204 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:56:53.156371 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:56:53.156512 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:56:53.156623 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:56:53.156739 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 24 00:56:53.156897 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 24 00:56:53.156914 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:56:53.156926 kernel: Initialise system trusted keyrings Jan 24 00:56:53.156939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:56:53.156953 kernel: Key type asymmetric registered Jan 24 00:56:53.156962 kernel: Asymmetric key parser 'x509' registered Jan 24 00:56:53.156972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:56:53.156985 kernel: io scheduler mq-deadline registered Jan 24 00:56:53.156996 kernel: io scheduler kyber registered Jan 24 00:56:53.157007 kernel: io scheduler bfq registered Jan 24 00:56:53.157020 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:56:53.157037 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:56:53.157105 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:56:53.157113 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:56:53.157120 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:56:53.157127 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:56:53.157134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:56:53.157140 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:56:53.157147 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:56:53.157582 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:56:53.157612 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:56:53.157867 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:56:53.158461 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:56:52 UTC (1769216212) Jan 24 00:56:53.158690 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:56:53.158712 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:56:53.158724 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:56:53.158737 kernel: Segment Routing with IPv6 Jan 24 00:56:53.158754 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:56:53.158765 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:56:53.158777 kernel: Key type dns_resolver registered Jan 24 00:56:53.158790 kernel: IPI shorthand broadcast: enabled Jan 24 00:56:53.158802 kernel: sched_clock: Marking stable (1309026440, 330658324)->(2049676740, -409991976) Jan 24 00:56:53.158813 kernel: registered taskstats version 1 Jan 24 00:56:53.158825 kernel: Loading compiled-in X.509 certificates Jan 24 00:56:53.158836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:56:53.158847 kernel: Key type .fscrypt registered Jan 24 00:56:53.158858 kernel: Key type fscrypt-provisioning registered Jan 24 00:56:53.158874 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:56:53.158883 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:56:53.158890 kernel: ima: No architecture policies found Jan 24 00:56:53.158896 kernel: clk: Disabling unused clocks Jan 24 00:56:53.158907 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:56:53.158919 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:56:53.158932 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:56:53.158942 kernel: Run /init as init process Jan 24 00:56:53.158960 kernel: with arguments: Jan 24 00:56:53.158970 kernel: /init Jan 24 00:56:53.158983 kernel: with environment: Jan 24 00:56:53.158994 kernel: HOME=/ Jan 24 00:56:53.159006 kernel: TERM=linux Jan 24 00:56:53.159019 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:56:53.159034 systemd[1]: Detected virtualization kvm. Jan 24 00:56:53.159129 systemd[1]: Detected architecture x86-64. Jan 24 00:56:53.159150 systemd[1]: Running in initrd. Jan 24 00:56:53.159162 systemd[1]: No hostname configured, using default hostname. Jan 24 00:56:53.159175 systemd[1]: Hostname set to . Jan 24 00:56:53.159187 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:56:53.159200 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:56:53.159210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:56:53.159223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:53.159236 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:56:53.159526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:56:53.159540 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:56:53.159554 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:56:53.159570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:56:53.159582 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:56:53.159594 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:53.159605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:56:53.159623 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:56:53.159635 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:56:53.159647 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:56:53.159680 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:56:53.159696 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:56:53.159709 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:56:53.159724 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:56:53.159736 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 24 00:56:53.159751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:53.159762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:56:53.159777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:53.159789 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:56:53.159802 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:56:53.159814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:56:53.159826 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:56:53.159875 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:56:53.159888 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:56:53.159900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:56:53.159938 systemd-journald[195]: Collecting audit messages is disabled. Jan 24 00:56:53.159974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:53.159988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:56:53.160000 systemd-journald[195]: Journal started Jan 24 00:56:53.160026 systemd-journald[195]: Runtime Journal (/run/log/journal/fd00a6ff485e41f18aca73ce895347c0) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:56:53.176156 systemd-modules-load[196]: Inserted module 'overlay' Jan 24 00:56:53.420152 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:56:53.420196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:56:53.420214 kernel: Bridge firewalling registered Jan 24 00:56:53.183219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:53.214169 systemd-modules-load[196]: Inserted module 'br_netfilter' Jan 24 00:56:53.435295 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:56:53.442601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:53.451718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:53.479755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:56:53.484420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:56:53.488463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:56:53.492139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:56:53.503447 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:56:53.505399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:56:53.516848 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:56:53.527403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:56:53.536831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:56:53.546595 dracut-cmdline[222]: dracut-dracut-053 Jan 24 00:56:53.549403 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:56:53.548973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:53.584302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:53.600563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:56:53.638582 systemd-resolved[274]: Positive Trust Anchors: Jan 24 00:56:53.638618 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:56:53.638646 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:56:53.671101 systemd-resolved[274]: Defaulting to hostname 'linux'. Jan 24 00:56:53.676660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:56:53.691404 kernel: SCSI subsystem initialized Jan 24 00:56:53.682530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:53.699349 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:56:53.715365 kernel: iscsi: registered transport (tcp) Jan 24 00:56:53.747300 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:56:53.747414 kernel: QLogic iSCSI HBA Driver Jan 24 00:56:53.810336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:56:53.821746 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:56:53.863478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:56:53.863559 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:56:53.867464 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:56:53.918325 kernel: raid6: avx2x4 gen() 32985 MB/s Jan 24 00:56:53.936354 kernel: raid6: avx2x2 gen() 27413 MB/s Jan 24 00:56:53.956371 kernel: raid6: avx2x1 gen() 23074 MB/s Jan 24 00:56:53.956424 kernel: raid6: using algorithm avx2x4 gen() 32985 MB/s Jan 24 00:56:53.977618 kernel: raid6: .... xor() 4852 MB/s, rmw enabled Jan 24 00:56:53.977690 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:56:54.000353 kernel: xor: automatically using best checksumming function avx Jan 24 00:56:54.197369 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:56:54.215539 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:56:54.237496 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 24 00:56:54.261398 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 24 00:56:54.269193 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:54.304618 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:56:54.335457 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 24 00:56:54.379704 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:56:54.407490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:56:54.495123 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:54.513744 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:56:54.530179 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:56:54.538545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:56:54.542906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:54.555516 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:56:54.568313 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:56:54.570942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:56:54.586031 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:56:54.596668 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:56:54.599411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:56:54.613724 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:56:54.599564 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:56:54.638152 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:56:54.638197 kernel: GPT:9289727 != 19775487 Jan 24 00:56:54.638343 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:56:54.638356 kernel: GPT:9289727 != 19775487 Jan 24 00:56:54.638370 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:56:54.638380 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:56:54.612727 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:56:54.650475 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:56:54.637836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:56:54.638624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:54.650519 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:54.685630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:54.710439 kernel: AES CTR mode by8 optimization enabled Jan 24 00:56:54.710468 kernel: libata version 3.00 loaded. Jan 24 00:56:54.710479 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Jan 24 00:56:54.710489 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (471) Jan 24 00:56:54.726434 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:56:54.726734 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:56:54.729329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 24 00:56:54.978161 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:56:54.978573 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:56:54.978722 kernel: scsi host0: ahci Jan 24 00:56:54.978887 kernel: scsi host1: ahci Jan 24 00:56:54.979038 kernel: scsi host2: ahci Jan 24 00:56:54.979305 kernel: scsi host3: ahci Jan 24 00:56:54.979468 kernel: scsi host4: ahci Jan 24 00:56:54.979614 kernel: scsi host5: ahci Jan 24 00:56:54.979758 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:56:54.979769 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:56:54.979778 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:56:54.979788 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:56:54.979797 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:56:54.979810 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:56:54.982390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:56:54.987713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:56:54.995439 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:56:54.996055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:56:55.032619 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:56:55.035630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:55.040721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:56:55.084927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:56:55.084950 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:56:55.084961 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:56:55.084970 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:56:55.084980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:56:55.084994 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:56:55.085011 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:56:55.085024 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:56:55.085039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:56:55.085060 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:56:55.085125 kernel: ata3.00: applying bridge limits Jan 24 00:56:55.085142 disk-uuid[554]: Primary Header is updated. Jan 24 00:56:55.085142 disk-uuid[554]: Secondary Entries is updated. Jan 24 00:56:55.085142 disk-uuid[554]: Secondary Header is updated. Jan 24 00:56:55.102235 kernel: ata3.00: configured for UDMA/100 Jan 24 00:56:55.102306 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:56:55.140702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:56:55.177492 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:56:55.177754 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:56:55.191302 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:56:56.073321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:56:56.073378 disk-uuid[557]: The operation has completed successfully. Jan 24 00:56:56.107046 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:56:56.107306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:56:56.130442 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:56:56.138034 sh[599]: Success Jan 24 00:56:56.156293 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:56:56.198837 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:56:56.211932 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:56:56.215717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:56:56.235830 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:56:56.235858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:56.235869 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:56:56.242879 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:56:56.242901 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:56:56.254216 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:56:56.257613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:56:56.277659 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:56:56.280341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:56:56.312888 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:56.312922 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:56.312934 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:56:56.322351 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:56:56.335647 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:56:56.342319 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:56.353061 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:56:56.366509 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 24 00:56:56.437223 ignition[707]: Ignition 2.19.0 Jan 24 00:56:56.437328 ignition[707]: Stage: fetch-offline Jan 24 00:56:56.437369 ignition[707]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:56.437379 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:56.437471 ignition[707]: parsed url from cmdline: "" Jan 24 00:56:56.437476 ignition[707]: no config URL provided Jan 24 00:56:56.437482 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:56:56.437493 ignition[707]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:56:56.437522 ignition[707]: op(1): [started] loading QEMU firmware config module Jan 24 00:56:56.437528 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:56:56.450405 ignition[707]: op(1): [finished] loading QEMU firmware config module Jan 24 00:56:56.485820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:56:56.508479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:56:56.539555 systemd-networkd[787]: lo: Link UP Jan 24 00:56:56.539594 systemd-networkd[787]: lo: Gained carrier Jan 24 00:56:56.541371 systemd-networkd[787]: Enumeration completed Jan 24 00:56:56.542604 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:56.542608 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:56:56.543579 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:56:56.546646 systemd-networkd[787]: eth0: Link UP Jan 24 00:56:56.546650 systemd-networkd[787]: eth0: Gained carrier Jan 24 00:56:56.546657 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:56.555148 systemd[1]: Reached target network.target - Network. Jan 24 00:56:56.589350 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:56:56.754708 ignition[707]: parsing config with SHA512: 493a9780148e233b8d5092d2c0132a59a9f8c344a196cab93b0df5876e79a2335b28144dc9ae5f76bde57c587f93e16ec7e6b36b9fd2e23bdfc05812b198eb72 Jan 24 00:56:56.761326 unknown[707]: fetched base config from "system" Jan 24 00:56:56.761883 ignition[707]: fetch-offline: fetch-offline passed Jan 24 00:56:56.761338 unknown[707]: fetched user config from "qemu" Jan 24 00:56:56.761962 ignition[707]: Ignition finished successfully Jan 24 00:56:56.762538 systemd-resolved[274]: Detected conflict on linux IN A 10.0.0.132 Jan 24 00:56:56.762548 systemd-resolved[274]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jan 24 00:56:56.763953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:56:56.769967 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:56:56.785456 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:56:56.807888 ignition[791]: Ignition 2.19.0 Jan 24 00:56:56.807895 ignition[791]: Stage: kargs Jan 24 00:56:56.814139 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:56:56.808056 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:56.820037 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 24 00:56:56.808068 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:56.809405 ignition[791]: kargs: kargs passed Jan 24 00:56:56.809448 ignition[791]: Ignition finished successfully Jan 24 00:56:56.843777 ignition[800]: Ignition 2.19.0 Jan 24 00:56:56.843810 ignition[800]: Stage: disks Jan 24 00:56:56.843963 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:56.846956 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:56:56.843976 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:56.852785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:56:56.845340 ignition[800]: disks: disks passed Jan 24 00:56:56.859507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:56:56.845382 ignition[800]: Ignition finished successfully Jan 24 00:56:56.867915 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:56:56.871538 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:56:56.875183 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:56:56.914504 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:56:56.891649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:56:56.915626 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:56:56.937554 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:56:57.060317 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:56:57.060746 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:56:57.065359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:56:57.084445 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:56:57.108380 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Jan 24 00:56:57.108439 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:57.108467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:57.108486 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:56:57.089764 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:56:57.124912 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:56:57.111477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:56:57.111522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:56:57.111548 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:56:57.126344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:56:57.139126 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:56:57.163506 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 24 00:56:57.210548 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:56:57.217356 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:56:57.226864 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:56:57.233592 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:56:57.371663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:56:57.388433 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:56:57.395471 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:56:57.403052 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:56:57.414336 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:57.434513 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:56:57.449926 ignition[930]: INFO : Ignition 2.19.0 Jan 24 00:56:57.453500 ignition[930]: INFO : Stage: mount Jan 24 00:56:57.453500 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:57.453500 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:57.464232 ignition[930]: INFO : mount: mount passed Jan 24 00:56:57.464232 ignition[930]: INFO : Ignition finished successfully Jan 24 00:56:57.472338 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:56:57.490452 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:56:57.505676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:56:57.526330 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946) Jan 24 00:56:57.533994 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:57.534032 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:57.534052 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:56:57.545356 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:56:57.547359 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:56:57.590180 ignition[963]: INFO : Ignition 2.19.0 Jan 24 00:56:57.590180 ignition[963]: INFO : Stage: files Jan 24 00:56:57.596460 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:57.596460 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:57.606145 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:56:57.611043 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:56:57.611043 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:56:57.627951 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:56:57.633952 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:56:57.640022 unknown[963]: wrote ssh authorized keys file for user: core Jan 24 00:56:57.644397 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:56:57.650705 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:56:57.650705 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:56:57.650705 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:56:57.650705 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:56:57.701880 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:56:57.819773 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:56:57.819773 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:57.833517 ignition[963]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:57.833517 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:56:58.235140 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:56:58.470686 systemd-networkd[787]: eth0: Gained IPv6LL Jan 24 00:56:58.716910 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:58.716910 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 24 00:56:58.734807 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 24 00:56:58.745588 ignition[963]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:56:58.820456 ignition[963]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:56:58.826991 ignition[963]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:56:58.826991 ignition[963]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:56:58.826991 ignition[963]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:58.826991 
ignition[963]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:58.826991 ignition[963]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:58.826991 ignition[963]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:58.826991 ignition[963]: INFO : files: files passed Jan 24 00:56:58.826991 ignition[963]: INFO : Ignition finished successfully Jan 24 00:56:58.867236 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:56:58.880695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:56:58.883341 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:56:58.904566 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:56:58.904768 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:56:58.920381 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:56:58.929495 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:58.929495 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:58.940773 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:58.948308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:58.958350 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:56:58.979634 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:56:59.021539 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:56:59.025235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:56:59.034649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:56:59.041808 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:56:59.049212 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:56:59.066618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:56:59.089407 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:59.109674 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:56:59.131803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:59.140827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:59.149725 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:56:59.156232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:56:59.159713 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:59.168464 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:56:59.175766 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:56:59.182490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 24 00:56:59.190421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:56:59.198589 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:56:59.206642 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:56:59.213878 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:56:59.224528 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:56:59.232423 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:56:59.240165 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:56:59.246409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:56:59.250216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:56:59.257975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:56:59.265767 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:59.274771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:56:59.278341 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:56:59.289026 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:56:59.292894 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:56:59.304046 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:56:59.309592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:56:59.324230 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:56:59.334187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:56:59.340365 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:59.354828 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:56:59.364973 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:56:59.374833 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:56:59.378873 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:56:59.388791 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:56:59.392828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:56:59.402625 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:56:59.407098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:59.416233 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:56:59.416474 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:56:59.445758 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:56:59.455416 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:56:59.459709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:59.470756 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 24 00:56:59.477893 ignition[1018]: INFO : Ignition 2.19.0 Jan 24 00:56:59.477893 ignition[1018]: INFO : Stage: umount Jan 24 00:56:59.477893 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:59.477893 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:59.477893 ignition[1018]: INFO : umount: umount passed Jan 24 00:56:59.477893 ignition[1018]: INFO : Ignition finished successfully Jan 24 00:56:59.474425 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:56:59.477933 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:59.504559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:56:59.504742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:56:59.517233 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:56:59.518420 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:56:59.518531 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:56:59.531506 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:56:59.531623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:56:59.541227 systemd[1]: Stopped target network.target - Network. Jan 24 00:56:59.543877 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:56:59.543965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:56:59.555426 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:56:59.555482 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:56:59.562323 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:56:59.562375 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:56:59.569489 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:56:59.569541 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:56:59.571309 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:56:59.579091 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:56:59.586175 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:56:59.586398 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:56:59.593051 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:56:59.593224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:56:59.600041 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:56:59.600225 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:56:59.607602 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:56:59.607680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:59.664546 systemd-networkd[787]: eth0: DHCPv6 lease lost Jan 24 00:56:59.669825 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:56:59.670149 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:56:59.673606 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:56:59.673650 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:59.700677 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 24 00:56:59.703010 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:56:59.703088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:56:59.709187 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:56:59.709331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:56:59.720881 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:56:59.720977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:59.729887 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:56:59.757162 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:56:59.760992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:59.770901 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:56:59.770997 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:56:59.782924 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:56:59.783057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:59.791329 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:56:59.791442 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:56:59.806543 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:56:59.806647 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:56:59.819795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:56:59.819895 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:56:59.851567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:56:59.853185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:56:59.853378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:59.865046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:56:59.865174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:59.874830 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:56:59.875019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:56:59.913858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:56:59.914209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:56:59.923872 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:56:59.936520 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:56:59.949213 systemd[1]: Switching root. Jan 24 00:56:59.988679 systemd-journald[195]: Journal stopped Jan 24 00:57:01.537945 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:57:01.538012 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:57:01.538029 kernel: SELinux: policy capability open_perms=1 Jan 24 00:57:01.538040 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:57:01.538053 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:57:01.538064 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:57:01.538074 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:57:01.538084 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:57:01.538100 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:57:01.538111 kernel: audit: type=1403 audit(1769216220.253:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:57:01.538158 systemd[1]: Successfully loaded SELinux policy in 60.350ms. Jan 24 00:57:01.538182 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.019ms. Jan 24 00:57:01.538197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:57:01.538208 systemd[1]: Detected virtualization kvm. Jan 24 00:57:01.538219 systemd[1]: Detected architecture x86-64. Jan 24 00:57:01.538229 systemd[1]: Detected first boot. Jan 24 00:57:01.538293 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:57:01.538306 zram_generator::config[1081]: No configuration found. Jan 24 00:57:01.538317 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:57:01.538334 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:57:01.538349 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:57:01.538361 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:57:01.538372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:57:01.538382 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:57:01.538393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:57:01.538404 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:57:01.538415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:57:01.538426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:57:01.538439 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:57:01.538450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:57:01.538461 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:57:01.538471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:57:01.538482 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:57:01.538493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:57:01.538503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:57:01.538514 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 24 00:57:01.538525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:57:01.538538 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:57:01.538548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:57:01.538559 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:57:01.538570 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:57:01.538582 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:57:01.538593 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:57:01.538603 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:57:01.538614 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:57:01.538627 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:57:01.538638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:57:01.538648 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:57:01.538659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:57:01.538669 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:57:01.538680 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:57:01.538691 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:57:01.538701 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:57:01.538713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:01.538726 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:57:01.538737 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:57:01.538747 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:57:01.538758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:57:01.538769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:57:01.538779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:57:01.538790 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:57:01.538801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:57:01.538812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:57:01.538825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:57:01.538836 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:57:01.538847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:57:01.538858 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:57:01.538869 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 24 00:57:01.538880 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 24 00:57:01.538891 kernel: fuse: init (API version 7.39) Jan 24 00:57:01.538901 kernel: ACPI: bus type drm_connector registered Jan 24 00:57:01.538914 kernel: loop: module loaded Jan 24 00:57:01.538924 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:57:01.538935 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:57:01.538966 systemd-journald[1180]: Collecting audit messages is disabled. Jan 24 00:57:01.538987 systemd-journald[1180]: Journal started Jan 24 00:57:01.539006 systemd-journald[1180]: Runtime Journal (/run/log/journal/fd00a6ff485e41f18aca73ce895347c0) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:57:01.545526 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:57:01.553355 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:57:01.567611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:57:01.581453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:01.588729 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:57:01.594535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:57:01.599358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:57:01.604391 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:57:01.608765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:57:01.613670 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:57:01.619758 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:57:01.624541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:57:01.629171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:57:01.634477 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:57:01.634817 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:57:01.640686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:57:01.640998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:57:01.646628 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:57:01.646934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:57:01.651935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:57:01.652385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:57:01.658005 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:57:01.658455 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:57:01.663761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:57:01.664109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:57:01.669533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:57:01.675046 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:57:01.680975 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 24 00:57:01.700520 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:57:01.714421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:57:01.720713 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:57:01.724708 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:57:01.726842 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:57:01.729552 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:57:01.736584 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:57:01.740080 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:57:01.745879 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:57:01.749442 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:57:01.759025 systemd-journald[1180]: Time spent on flushing to /var/log/journal/fd00a6ff485e41f18aca73ce895347c0 is 15.466ms for 931 entries. Jan 24 00:57:01.759025 systemd-journald[1180]: System Journal (/var/log/journal/fd00a6ff485e41f18aca73ce895347c0) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:57:01.801723 systemd-journald[1180]: Received client request to flush runtime journal. Jan 24 00:57:01.765686 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:57:01.773486 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:57:01.778371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:57:01.783695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:57:01.790593 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:57:01.805870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:57:01.821221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:57:01.831825 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 24 00:57:01.831873 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 24 00:57:01.839104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:57:01.852482 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:57:01.858209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:57:01.867709 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:57:01.872027 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:57:01.906981 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:57:01.918508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:57:01.943573 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 24 00:57:01.943624 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. 
Jan 24 00:57:01.949744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:57:02.192945 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:57:02.210446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:57:02.241945 systemd-udevd[1246]: Using default interface naming scheme 'v255'. Jan 24 00:57:02.269476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:57:02.290477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:57:02.323443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1263) Jan 24 00:57:02.323506 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:57:02.337048 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 00:57:02.388440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:57:02.392363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:57:02.396418 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:57:02.398328 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:57:02.412852 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:57:02.413363 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:57:02.413743 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:57:02.462340 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:57:02.484382 systemd-networkd[1258]: lo: Link UP Jan 24 00:57:02.484390 systemd-networkd[1258]: lo: Gained carrier Jan 24 00:57:02.486125 systemd-networkd[1258]: Enumeration completed Jan 24 00:57:02.486669 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:57:02.491204 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:57:02.491348 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:57:02.492565 systemd-networkd[1258]: eth0: Link UP Jan 24 00:57:02.492617 systemd-networkd[1258]: eth0: Gained carrier Jan 24 00:57:02.492680 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:57:02.503362 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:57:02.525505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:57:02.583221 systemd-networkd[1258]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:57:02.615336 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:57:02.646639 kernel: kvm_amd: TSC scaling supported Jan 24 00:57:02.646704 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:57:02.648843 kernel: kvm_amd: Nested Paging enabled Jan 24 00:57:02.651449 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:57:02.651503 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:57:02.721000 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:57:02.751356 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 24 00:57:02.915574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:57:02.932465 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:57:02.948309 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:57:02.980964 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:57:02.986531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:57:03.002417 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:57:03.011187 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:57:03.043640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:57:03.049011 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:57:03.054378 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:57:03.054437 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:57:03.058554 systemd[1]: Reached target machines.target - Containers. Jan 24 00:57:03.064102 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:57:03.083446 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:57:03.090397 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:57:03.094938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:57:03.096117 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:57:03.103338 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:57:03.111451 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:57:03.118863 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:57:03.125754 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:57:03.133879 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 00:57:03.142868 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:57:03.143714 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:57:03.171344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:57:03.211332 kernel: loop1: detected capacity change from 0 to 224512 Jan 24 00:57:03.255314 kernel: loop2: detected capacity change from 0 to 142488 Jan 24 00:57:03.305374 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:57:03.328334 kernel: loop4: detected capacity change from 0 to 224512 Jan 24 00:57:03.345405 kernel: loop5: detected capacity change from 0 to 142488 Jan 24 00:57:03.364464 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:57:03.365112 (sd-merge)[1315]: Merged extensions into '/usr'. Jan 24 00:57:03.369980 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:57:03.370045 systemd[1]: Reloading... 
Jan 24 00:57:03.429330 zram_generator::config[1343]: No configuration found. Jan 24 00:57:03.437422 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:57:03.556813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:57:03.618481 systemd[1]: Reloading finished in 247 ms. Jan 24 00:57:03.637623 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:57:03.642573 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:57:03.663637 systemd[1]: Starting ensure-sysext.service... Jan 24 00:57:03.668667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:57:03.675316 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:57:03.675395 systemd[1]: Reloading... Jan 24 00:57:03.695812 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:57:03.696518 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:57:03.697755 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:57:03.698120 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Jan 24 00:57:03.698413 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Jan 24 00:57:03.702110 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:57:03.702319 systemd-tmpfiles[1388]: Skipping /boot Jan 24 00:57:03.718017 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:57:03.718773 systemd-tmpfiles[1388]: Skipping /boot Jan 24 00:57:03.736367 zram_generator::config[1416]: No configuration found. Jan 24 00:57:03.855946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:57:03.910577 systemd-networkd[1258]: eth0: Gained IPv6LL Jan 24 00:57:03.919988 systemd[1]: Reloading finished in 244 ms. Jan 24 00:57:03.944632 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:57:03.959059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:57:03.973951 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:57:03.980529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:57:03.987898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:57:03.996644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:57:04.006430 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:57:04.014573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.014724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 24 00:57:04.018514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:57:04.028512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:57:04.033699 augenrules[1486]: No rules Jan 24 00:57:04.039554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:57:04.044848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:57:04.045203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.048125 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:57:04.052760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:57:04.053002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:57:04.058232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:57:04.058731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:57:04.063627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:57:04.068760 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:57:04.069104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:57:04.081859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.082670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:57:04.091530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:57:04.096359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:57:04.101790 systemd-resolved[1474]: Positive Trust Anchors: Jan 24 00:57:04.101823 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:57:04.101850 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:57:04.103636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:57:04.106513 systemd-resolved[1474]: Defaulting to hostname 'linux'. Jan 24 00:57:04.107191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:57:04.110746 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:57:04.114539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.117063 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:57:04.122304 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 24 00:57:04.127554 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:57:04.132530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:57:04.132760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:57:04.137769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:57:04.138024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:57:04.142948 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:57:04.143490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:57:04.148060 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:57:04.166219 systemd[1]: Reached target network.target - Network. Jan 24 00:57:04.170122 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:57:04.174418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:57:04.178781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.179203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:57:04.189691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:57:04.194646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:57:04.199440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:57:04.205971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:57:04.209788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:57:04.209995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:57:04.210212 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:57:04.211913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:57:04.212202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:57:04.217361 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:57:04.217585 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:57:04.222225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:57:04.222521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:57:04.227616 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:57:04.227880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:57:04.234499 systemd[1]: Finished ensure-sysext.service. Jan 24 00:57:04.241977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:57:04.242074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:57:04.261635 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 24 00:57:04.326970 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:57:04.328346 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:57:04.328415 systemd-timesyncd[1535]: Initial clock synchronization to Sat 2026-01-24 00:57:04.669620 UTC. Jan 24 00:57:04.334370 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:57:04.340513 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:57:04.347362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:57:04.354110 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:57:04.361039 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:57:04.361112 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:57:04.366219 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:57:04.372378 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:57:04.378356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:57:04.385124 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:57:04.389549 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:57:04.395362 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:57:04.399966 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:57:04.405492 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:57:04.411764 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:57:04.415623 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:57:04.419640 systemd[1]: System is tainted: cgroupsv1 Jan 24 00:57:04.419708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:57:04.419733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:57:04.421227 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:57:04.428794 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:57:04.435949 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:57:04.441773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:57:04.449513 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:57:04.455071 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:57:04.456107 jq[1544]: false Jan 24 00:57:04.457439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:04.468140 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 24 00:57:04.474761 extend-filesystems[1545]: Found loop3 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found loop4 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found loop5 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found sr0 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda1 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda2 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda3 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found usr Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda4 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda6 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda7 Jan 24 00:57:04.474761 extend-filesystems[1545]: Found vda9 Jan 24 00:57:04.474761 extend-filesystems[1545]: Checking size of /dev/vda9 Jan 24 00:57:04.633695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1261) Jan 24 00:57:04.633737 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:57:04.633761 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:57:04.482848 dbus-daemon[1542]: [system] SELinux support is enabled Jan 24 00:57:04.638132 extend-filesystems[1545]: Resized partition /dev/vda9 Jan 24 00:57:04.495048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:57:04.646357 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:57:04.646357 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:57:04.646357 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:57:04.646357 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:57:04.509699 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:57:04.675584 extend-filesystems[1545]: Resized filesystem in /dev/vda9 Jan 24 00:57:04.516434 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:57:04.519823 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:57:04.546234 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:57:04.554009 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:57:04.559439 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:57:04.676890 update_engine[1563]: I20260124 00:57:04.618336 1563 main.cc:92] Flatcar Update Engine starting Jan 24 00:57:04.676890 update_engine[1563]: I20260124 00:57:04.620376 1563 update_check_scheduler.cc:74] Next update check in 3m26s Jan 24 00:57:04.588049 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:57:04.677387 jq[1571]: true Jan 24 00:57:04.601014 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:57:04.636864 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:57:04.637216 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:57:04.637619 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:57:04.640920 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:57:04.650500 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 24 00:57:04.650886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:57:04.655824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:57:04.657915 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:57:04.657937 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:57:04.660410 systemd-logind[1562]: New seat seat0. Jan 24 00:57:04.663503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:57:04.663783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:57:04.670489 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:57:04.699371 jq[1592]: true Jan 24 00:57:04.709070 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:57:04.709494 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:57:04.712800 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:57:04.722681 tar[1590]: linux-amd64/LICENSE Jan 24 00:57:04.722949 tar[1590]: linux-amd64/helm Jan 24 00:57:04.723718 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:57:04.732879 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:57:04.738371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:57:04.738667 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:57:04.738779 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:57:04.745745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:57:04.745845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:57:04.752954 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:57:04.760672 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:57:04.771647 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:57:04.788142 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:57:04.779925 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:57:04.790612 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:57:04.809888 locksmithd[1626]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:57:04.809895 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:57:04.824539 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:57:04.838592 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:57:04.838898 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:57:04.855060 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 24 00:57:04.877102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:57:04.890617 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:57:04.899338 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:57:04.905098 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:57:04.967345 containerd[1593]: time="2026-01-24T00:57:04.966734569Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:57:04.987236 containerd[1593]: time="2026-01-24T00:57:04.987197904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.990334 containerd[1593]: time="2026-01-24T00:57:04.990304352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:57:04.990432 containerd[1593]: time="2026-01-24T00:57:04.990416111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:57:04.990496 containerd[1593]: time="2026-01-24T00:57:04.990478457Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:57:04.990871 containerd[1593]: time="2026-01-24T00:57:04.990686495Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:57:04.990871 containerd[1593]: time="2026-01-24T00:57:04.990716933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.990871 containerd[1593]: time="2026-01-24T00:57:04.990786152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:57:04.990871 containerd[1593]: time="2026-01-24T00:57:04.990802502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.991341 containerd[1593]: time="2026-01-24T00:57:04.991317162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:57:04.991396 containerd[1593]: time="2026-01-24T00:57:04.991383647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.991444 containerd[1593]: time="2026-01-24T00:57:04.991431707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:57:04.991749 containerd[1593]: time="2026-01-24T00:57:04.991473805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.991904 containerd[1593]: time="2026-01-24T00:57:04.991885073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:57:04.992959 containerd[1593]: time="2026-01-24T00:57:04.992462601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:57:04.992959 containerd[1593]: time="2026-01-24T00:57:04.992671480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:57:04.992959 containerd[1593]: time="2026-01-24T00:57:04.992688282Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:57:04.992959 containerd[1593]: time="2026-01-24T00:57:04.992781356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:57:04.992959 containerd[1593]: time="2026-01-24T00:57:04.992840386Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:57:04.999217 containerd[1593]: time="2026-01-24T00:57:04.999140645Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:57:04.999413 containerd[1593]: time="2026-01-24T00:57:04.999397124Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:57:04.999598 containerd[1593]: time="2026-01-24T00:57:04.999575527Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:57:04.999683 containerd[1593]: time="2026-01-24T00:57:04.999668331Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:57:04.999756 containerd[1593]: time="2026-01-24T00:57:04.999742749Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:57:05.000051 containerd[1593]: time="2026-01-24T00:57:04.999962179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:57:05.000848 containerd[1593]: time="2026-01-24T00:57:05.000727981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001030279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001051182Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001071720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001084798Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001096947Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001109012Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001121821Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001139339Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001151895Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001163240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001175473Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001255126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001271078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.001615 containerd[1593]: time="2026-01-24T00:57:05.001344943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001358378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001371300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001383983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001394700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001415175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001460815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001475648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001489825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001501202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001512180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001529458Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001550027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001560870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002038 containerd[1593]: time="2026-01-24T00:57:05.001570116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:57:05.002481 containerd[1593]: time="2026-01-24T00:57:05.002404259Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:57:05.002592 containerd[1593]: time="2026-01-24T00:57:05.002576017Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:57:05.002637 containerd[1593]: time="2026-01-24T00:57:05.002625691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:57:05.002695 containerd[1593]: time="2026-01-24T00:57:05.002681349Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:57:05.002735 containerd[1593]: time="2026-01-24T00:57:05.002725047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:57:05.002778 containerd[1593]: time="2026-01-24T00:57:05.002767824Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:57:05.002823 containerd[1593]: time="2026-01-24T00:57:05.002813078Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:57:05.002864 containerd[1593]: time="2026-01-24T00:57:05.002853662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:57:05.003205 containerd[1593]: time="2026-01-24T00:57:05.003091797Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:57:05.004643 containerd[1593]: time="2026-01-24T00:57:05.003590914Z" level=info msg="Connect containerd service" Jan 24 00:57:05.004643 containerd[1593]: time="2026-01-24T00:57:05.003629911Z" level=info msg="using legacy CRI server" Jan 24 00:57:05.004643 containerd[1593]: time="2026-01-24T00:57:05.003638278Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:57:05.004643 containerd[1593]: time="2026-01-24T00:57:05.003707340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:57:05.004876 containerd[1593]: time="2026-01-24T00:57:05.004855689Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 
00:57:05.005104 containerd[1593]: time="2026-01-24T00:57:05.005077862Z" level=info msg="Start subscribing containerd event" Jan 24 00:57:05.005317 containerd[1593]: time="2026-01-24T00:57:05.005200826Z" level=info msg="Start recovering state" Jan 24 00:57:05.005542 containerd[1593]: time="2026-01-24T00:57:05.005527683Z" level=info msg="Start event monitor" Jan 24 00:57:05.006058 containerd[1593]: time="2026-01-24T00:57:05.006040756Z" level=info msg="Start snapshots syncer" Jan 24 00:57:05.006565 containerd[1593]: time="2026-01-24T00:57:05.006547583Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:57:05.006617 containerd[1593]: time="2026-01-24T00:57:05.006606803Z" level=info msg="Start streaming server" Jan 24 00:57:05.006764 containerd[1593]: time="2026-01-24T00:57:05.006489219Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:57:05.006916 containerd[1593]: time="2026-01-24T00:57:05.006805055Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:57:05.007841 containerd[1593]: time="2026-01-24T00:57:05.006950343Z" level=info msg="containerd successfully booted in 0.041734s" Jan 24 00:57:05.007095 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:57:05.262128 tar[1590]: linux-amd64/README.md Jan 24 00:57:05.279259 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:57:05.710594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:05.717677 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:57:05.723752 systemd[1]: Startup finished in 8.948s (kernel) + 5.529s (userspace) = 14.478s. Jan 24 00:57:05.797419 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:57:06.385328 kubelet[1675]: E0124 00:57:06.385158 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:57:06.389191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:57:06.389670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:57:07.125398 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:57:07.137606 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:50004.service - OpenSSH per-connection server daemon (10.0.0.1:50004). Jan 24 00:57:07.206095 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 50004 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:07.209067 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:07.220062 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:57:07.226552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:57:07.229224 systemd-logind[1562]: New session 1 of user core. Jan 24 00:57:07.246518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:57:07.253643 systemd[1]: Starting user@500.service - User Manager for UID 500... 
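The kubelet failure above (exit status 1 because /var/lib/kubelet/config.yaml does not exist) is the expected state of a node that has not yet been joined to a cluster; on a kubeadm-managed host that file is only written by kubeadm init/join. Purely to illustrate the shape of that file (the values below are assumptions chosen to match settings visible later in this log, not what this host eventually receives), a minimal KubeletConfiguration can be emitted like so:

#!/usr/bin/env python3
"""Sketch only: a minimal kubelet.config.k8s.io/v1beta1 KubeletConfiguration.
Field values are placeholders (cgroupfs driver, containerd socket,
/etc/kubernetes/manifests), not taken from this host's real config."""
import yaml  # PyYAML

minimal = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "cgroupDriver": "cgroupfs",
    "staticPodPath": "/etc/kubernetes/manifests",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
}

print(yaml.safe_dump(minimal, sort_keys=False))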
Jan 24 00:57:07.260521 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:57:07.400975 systemd[1695]: Queued start job for default target default.target. Jan 24 00:57:07.401514 systemd[1695]: Created slice app.slice - User Application Slice. Jan 24 00:57:07.401543 systemd[1695]: Reached target paths.target - Paths. Jan 24 00:57:07.401561 systemd[1695]: Reached target timers.target - Timers. Jan 24 00:57:07.411521 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:57:07.425703 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:57:07.425849 systemd[1695]: Reached target sockets.target - Sockets. Jan 24 00:57:07.425875 systemd[1695]: Reached target basic.target - Basic System. Jan 24 00:57:07.425946 systemd[1695]: Reached target default.target - Main User Target. Jan 24 00:57:07.426010 systemd[1695]: Startup finished in 156ms. Jan 24 00:57:07.426102 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:57:07.428466 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:57:07.493709 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:50014.service - OpenSSH per-connection server daemon (10.0.0.1:50014). Jan 24 00:57:07.541549 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 50014 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:07.544245 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:07.552485 systemd-logind[1562]: New session 2 of user core. Jan 24 00:57:07.562702 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:57:07.628134 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:07.640838 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:50018.service - OpenSSH per-connection server daemon (10.0.0.1:50018). Jan 24 00:57:07.641730 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:50014.service: Deactivated successfully. Jan 24 00:57:07.645749 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:57:07.646920 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:57:07.649558 systemd-logind[1562]: Removed session 2. Jan 24 00:57:07.678125 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 50018 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:07.679954 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:07.685753 systemd-logind[1562]: New session 3 of user core. Jan 24 00:57:07.706839 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:57:07.763473 sshd[1712]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:07.770617 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:50020.service - OpenSSH per-connection server daemon (10.0.0.1:50020). Jan 24 00:57:07.771112 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:50018.service: Deactivated successfully. Jan 24 00:57:07.773705 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:57:07.774138 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:57:07.776614 systemd-logind[1562]: Removed session 3. 
Jan 24 00:57:07.815226 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 50020 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:07.817166 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:07.824633 systemd-logind[1562]: New session 4 of user core. Jan 24 00:57:07.837952 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:57:07.904233 sshd[1720]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:07.914671 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:50024.service - OpenSSH per-connection server daemon (10.0.0.1:50024). Jan 24 00:57:07.915403 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:50020.service: Deactivated successfully. Jan 24 00:57:07.917828 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:57:07.919021 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:57:07.921980 systemd-logind[1562]: Removed session 4. Jan 24 00:57:07.950712 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 50024 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:07.952579 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:07.958664 systemd-logind[1562]: New session 5 of user core. Jan 24 00:57:07.968685 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:57:08.041488 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:57:08.042125 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:57:08.071950 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 24 00:57:08.074962 sshd[1728]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:08.082593 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028). Jan 24 00:57:08.083215 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:50024.service: Deactivated successfully. Jan 24 00:57:08.086161 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:57:08.087665 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:57:08.088679 systemd-logind[1562]: Removed session 5. Jan 24 00:57:08.123694 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:08.126402 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:08.133481 systemd-logind[1562]: New session 6 of user core. Jan 24 00:57:08.145625 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:57:08.209698 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:57:08.210078 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:57:08.216937 sudo[1745]: pam_unix(sudo:session): session closed for user root Jan 24 00:57:08.226870 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:57:08.227500 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:57:08.257744 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:57:08.260713 auditctl[1748]: No rules Jan 24 00:57:08.261454 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 24 00:57:08.261922 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:57:08.266677 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:57:08.315189 augenrules[1767]: No rules Jan 24 00:57:08.317106 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:57:08.318667 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 24 00:57:08.321566 sshd[1737]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:08.335784 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Jan 24 00:57:08.337563 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:50028.service: Deactivated successfully. Jan 24 00:57:08.340669 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:57:08.341855 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:57:08.343499 systemd-logind[1562]: Removed session 6. Jan 24 00:57:08.372359 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:08.374108 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:08.380846 systemd-logind[1562]: New session 7 of user core. Jan 24 00:57:08.391053 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:57:08.450740 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:57:08.451115 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:57:08.844653 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:57:08.845029 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:57:09.192372 dockerd[1798]: time="2026-01-24T00:57:09.192131111Z" level=info msg="Starting up" Jan 24 00:57:09.520433 dockerd[1798]: time="2026-01-24T00:57:09.520201098Z" level=info msg="Loading containers: start." Jan 24 00:57:09.708392 kernel: Initializing XFRM netlink socket Jan 24 00:57:09.841869 systemd-networkd[1258]: docker0: Link UP Jan 24 00:57:09.873954 dockerd[1798]: time="2026-01-24T00:57:09.873879412Z" level=info msg="Loading containers: done." Jan 24 00:57:09.896002 dockerd[1798]: time="2026-01-24T00:57:09.895877565Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:57:09.896147 dockerd[1798]: time="2026-01-24T00:57:09.896023719Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:57:09.896147 dockerd[1798]: time="2026-01-24T00:57:09.896128486Z" level=info msg="Daemon has completed initialization" Jan 24 00:57:09.954782 dockerd[1798]: time="2026-01-24T00:57:09.954556641Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:57:09.956557 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:57:10.789089 containerd[1593]: time="2026-01-24T00:57:10.789019732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:57:11.369035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562168995.mount: Deactivated successfully. 
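Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable over that Unix socket. A hedged sketch of talking to it directly with only the standard library (raw HTTP over the socket; no docker CLI or SDK assumed, and the caller needs read access to the socket):

#!/usr/bin/env python3
"""Sketch: query the Docker Engine API over the Unix socket announced above."""
import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/docker.sock")
# HTTP/1.0 keeps this simple: the daemon closes the connection after replying.
sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()

print(response.decode())  # headers plus a JSON body (Version, ApiVersion, ...)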
Jan 24 00:57:12.747853 containerd[1593]: time="2026-01-24T00:57:12.747721571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:12.748883 containerd[1593]: time="2026-01-24T00:57:12.748805256Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:57:12.750052 containerd[1593]: time="2026-01-24T00:57:12.749981448Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:12.753519 containerd[1593]: time="2026-01-24T00:57:12.753396621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:12.754700 containerd[1593]: time="2026-01-24T00:57:12.754607029Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.965549227s" Jan 24 00:57:12.754700 containerd[1593]: time="2026-01-24T00:57:12.754660651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:57:12.755679 containerd[1593]: time="2026-01-24T00:57:12.755572508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:57:14.125558 containerd[1593]: time="2026-01-24T00:57:14.125401984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:14.126610 containerd[1593]: time="2026-01-24T00:57:14.126455686Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:57:14.127981 containerd[1593]: time="2026-01-24T00:57:14.127803987Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:14.131845 containerd[1593]: time="2026-01-24T00:57:14.131749502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:14.133349 containerd[1593]: time="2026-01-24T00:57:14.133223995Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.377622427s" Jan 24 00:57:14.133398 containerd[1593]: time="2026-01-24T00:57:14.133348593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:57:14.134387 containerd[1593]: time="2026-01-24T00:57:14.134343418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:57:15.325621 containerd[1593]: time="2026-01-24T00:57:15.325466116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:15.326932 containerd[1593]: time="2026-01-24T00:57:15.326793652Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:57:15.328306 containerd[1593]: time="2026-01-24T00:57:15.328106358Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:15.331796 containerd[1593]: time="2026-01-24T00:57:15.331713157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:15.332648 containerd[1593]: time="2026-01-24T00:57:15.332540700Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.198142734s" Jan 24 00:57:15.332648 containerd[1593]: time="2026-01-24T00:57:15.332594163Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:57:15.333396 containerd[1593]: time="2026-01-24T00:57:15.333374530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:57:16.413072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:57:16.424635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:16.434710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93491943.mount: Deactivated successfully. Jan 24 00:57:16.630473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:16.642101 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:57:16.732931 kubelet[2033]: E0124 00:57:16.732743 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:57:16.741594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:57:16.741922 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:57:17.171395 containerd[1593]: time="2026-01-24T00:57:17.171124011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:17.172990 containerd[1593]: time="2026-01-24T00:57:17.172797348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:57:17.174748 containerd[1593]: time="2026-01-24T00:57:17.174617666Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:17.177507 containerd[1593]: time="2026-01-24T00:57:17.177473827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:17.179141 containerd[1593]: time="2026-01-24T00:57:17.178942334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.84547504s" Jan 24 00:57:17.179141 containerd[1593]: time="2026-01-24T00:57:17.179049859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:57:17.180080 containerd[1593]: time="2026-01-24T00:57:17.179894296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:57:17.853502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041729895.mount: Deactivated successfully. 
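The pull entries above report both the transferred size and the wall-clock duration, so effective throughput can be read straight off the log; the kube-proxy pull (31,160,918 bytes in 1.84547504s) works out to roughly 16 MiB/s. A trivial check of that arithmetic:

"""Quick arithmetic on figures quoted in the pull log above."""
size_bytes = 31_160_918          # kube-proxy image size reported above
duration_s = 1.84547504          # pull duration reported above

mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # ~16.1 MiB/s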
Jan 24 00:57:18.891823 kernel: hrtimer: interrupt took 4374449 ns Jan 24 00:57:20.845537 containerd[1593]: time="2026-01-24T00:57:20.845393609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.846805 containerd[1593]: time="2026-01-24T00:57:20.846730811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:57:20.848115 containerd[1593]: time="2026-01-24T00:57:20.848071213Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.851747 containerd[1593]: time="2026-01-24T00:57:20.851619106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:20.853424 containerd[1593]: time="2026-01-24T00:57:20.853379707Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.673392601s" Jan 24 00:57:20.853541 containerd[1593]: time="2026-01-24T00:57:20.853430724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:57:20.854744 containerd[1593]: time="2026-01-24T00:57:20.854617424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:57:21.434037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776520599.mount: Deactivated successfully. 
Jan 24 00:57:21.485884 containerd[1593]: time="2026-01-24T00:57:21.484696712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:21.487998 containerd[1593]: time="2026-01-24T00:57:21.487881080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:57:21.490087 containerd[1593]: time="2026-01-24T00:57:21.489953758Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:21.493160 containerd[1593]: time="2026-01-24T00:57:21.493081270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:21.494975 containerd[1593]: time="2026-01-24T00:57:21.494846939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 640.201417ms" Jan 24 00:57:21.494975 containerd[1593]: time="2026-01-24T00:57:21.494953481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:57:21.496047 containerd[1593]: time="2026-01-24T00:57:21.495899930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:57:22.129943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521229299.mount: Deactivated successfully. Jan 24 00:57:24.448336 containerd[1593]: time="2026-01-24T00:57:24.448136587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:24.449651 containerd[1593]: time="2026-01-24T00:57:24.449534136Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 24 00:57:24.451426 containerd[1593]: time="2026-01-24T00:57:24.451320734Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:24.456167 containerd[1593]: time="2026-01-24T00:57:24.456011593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:24.458662 containerd[1593]: time="2026-01-24T00:57:24.458545112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.962572248s" Jan 24 00:57:24.458662 containerd[1593]: time="2026-01-24T00:57:24.458609840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:57:26.913043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 24 00:57:26.923668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:27.063496 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:57:27.063865 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:57:27.064598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:27.076882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:27.112985 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-7.scope)... Jan 24 00:57:27.113052 systemd[1]: Reloading... Jan 24 00:57:27.220729 zram_generator::config[2227]: No configuration found. Jan 24 00:57:27.407466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:57:27.489343 systemd[1]: Reloading finished in 375 ms. Jan 24 00:57:27.557077 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:57:27.557364 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:57:27.558072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:27.572365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:27.755122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:27.769998 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:57:27.848136 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:57:27.848136 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:57:27.848136 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:57:27.849817 kubelet[2288]: I0124 00:57:27.848745 2288 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:57:28.146861 kubelet[2288]: I0124 00:57:28.146609 2288 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:57:28.146861 kubelet[2288]: I0124 00:57:28.146683 2288 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:57:28.147023 kubelet[2288]: I0124 00:57:28.146974 2288 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:57:28.177577 kubelet[2288]: E0124 00:57:28.177489 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:28.178868 kubelet[2288]: I0124 00:57:28.178815 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:57:28.191457 kubelet[2288]: E0124 00:57:28.189399 2288 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:57:28.191457 kubelet[2288]: I0124 00:57:28.189483 2288 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:57:28.199023 kubelet[2288]: I0124 00:57:28.198841 2288 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:57:28.201521 kubelet[2288]: I0124 00:57:28.201362 2288 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:57:28.201744 kubelet[2288]: I0124 00:57:28.201450 2288 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:57:28.201744 kubelet[2288]: I0124 00:57:28.201722 2288 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:57:28.201744 kubelet[2288]: I0124 00:57:28.201737 2288 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:57:28.201917 kubelet[2288]: I0124 00:57:28.201900 2288 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:57:28.206217 kubelet[2288]: I0124 00:57:28.206047 2288 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:57:28.206217 kubelet[2288]: I0124 00:57:28.206123 2288 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:57:28.206217 kubelet[2288]: I0124 00:57:28.206150 2288 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:57:28.206217 kubelet[2288]: I0124 00:57:28.206165 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:57:28.209687 kubelet[2288]: W0124 00:57:28.209634 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:28.209954 kubelet[2288]: E0124 00:57:28.209807 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:28.210769 kubelet[2288]: I0124 00:57:28.210686 2288 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:57:28.210826 kubelet[2288]: W0124 00:57:28.210757 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:28.210826 kubelet[2288]: E0124 00:57:28.210812 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:28.211551 kubelet[2288]: I0124 00:57:28.211411 2288 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:57:28.212635 kubelet[2288]: W0124 00:57:28.212556 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:57:28.215390 kubelet[2288]: I0124 00:57:28.215340 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:57:28.215455 kubelet[2288]: I0124 00:57:28.215412 2288 server.go:1287] "Started kubelet" Jan 24 00:57:28.217712 kubelet[2288]: I0124 00:57:28.217592 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:57:28.218218 kubelet[2288]: I0124 00:57:28.217897 2288 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:57:28.219122 kubelet[2288]: I0124 00:57:28.218966 2288 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:57:28.219869 kubelet[2288]: I0124 00:57:28.219740 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:57:28.219869 kubelet[2288]: I0124 00:57:28.219803 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:57:28.219869 kubelet[2288]: I0124 00:57:28.219839 2288 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:57:28.220408 kubelet[2288]: W0124 00:57:28.220344 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:28.220408 kubelet[2288]: E0124 00:57:28.220413 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:28.222354 kubelet[2288]: E0124 00:57:28.220396 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84cdd5ba3464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:57:28.215385188 +0000 UTC m=+0.438748820,LastTimestamp:2026-01-24 
00:57:28.215385188 +0000 UTC m=+0.438748820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:57:28.222354 kubelet[2288]: I0124 00:57:28.221783 2288 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:57:28.222354 kubelet[2288]: I0124 00:57:28.221856 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:57:28.223460 kubelet[2288]: I0124 00:57:28.223363 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:57:28.223695 kubelet[2288]: I0124 00:57:28.223628 2288 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:57:28.223880 kubelet[2288]: I0124 00:57:28.223856 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:57:28.224417 kubelet[2288]: E0124 00:57:28.224198 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:28.224899 kubelet[2288]: E0124 00:57:28.224790 2288 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:57:28.224973 kubelet[2288]: I0124 00:57:28.224908 2288 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:57:28.231687 kubelet[2288]: E0124 00:57:28.231649 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Jan 24 00:57:28.261061 kubelet[2288]: I0124 00:57:28.260984 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:57:28.261061 kubelet[2288]: I0124 00:57:28.261034 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:57:28.261061 kubelet[2288]: I0124 00:57:28.261051 2288 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:57:28.261564 kubelet[2288]: I0124 00:57:28.261386 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:57:28.264062 kubelet[2288]: I0124 00:57:28.264011 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:57:28.264588 kubelet[2288]: I0124 00:57:28.264483 2288 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:57:28.264588 kubelet[2288]: I0124 00:57:28.264508 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
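The eviction-manager entries here act on the HardEvictionThresholds listed in the container-manager nodeConfig dumped a few entries above (memory.available < 100Mi, nodefs.available < 10%, and so on, all with the LessThan operator). A hedged sketch of how such a threshold is evaluated; the capacity figures below are invented for the example, not taken from this host:

"""Sketch of hard-eviction threshold evaluation, using the signals and values
from the kubelet's nodeConfig dump above. Node capacities are made up."""

THRESHOLDS = {
    "memory.available":   ("quantity",   100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breaches(signal: str, observed: float, capacity: float) -> bool:
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return observed < limit          # every operator in the dump is LessThan

# 80 MiB free on a node with 4 GiB of memory breaches memory.available < 100Mi.
print(breaches("memory.available", 80 * 2**20, 4 * 2**30))   # True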
Jan 24 00:57:28.264588 kubelet[2288]: I0124 00:57:28.264517 2288 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:57:28.264588 kubelet[2288]: E0124 00:57:28.264563 2288 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:57:28.324635 kubelet[2288]: E0124 00:57:28.324443 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:28.350177 kubelet[2288]: I0124 00:57:28.349957 2288 policy_none.go:49] "None policy: Start" Jan 24 00:57:28.350177 kubelet[2288]: I0124 00:57:28.350056 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:57:28.350177 kubelet[2288]: I0124 00:57:28.350083 2288 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:57:28.350942 kubelet[2288]: W0124 00:57:28.350802 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:28.350942 kubelet[2288]: E0124 00:57:28.350923 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:28.358588 kubelet[2288]: I0124 00:57:28.358516 2288 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:57:28.358819 kubelet[2288]: I0124 00:57:28.358720 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:57:28.358819 kubelet[2288]: I0124 00:57:28.358732 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:57:28.360341 kubelet[2288]: I0124 00:57:28.360160 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:57:28.360937 kubelet[2288]: E0124 00:57:28.360907 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:57:28.360983 kubelet[2288]: E0124 00:57:28.360940 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:57:28.373469 kubelet[2288]: E0124 00:57:28.371953 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:28.373834 kubelet[2288]: E0124 00:57:28.373793 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:28.378895 kubelet[2288]: E0124 00:57:28.378800 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:28.433026 kubelet[2288]: E0124 00:57:28.432974 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Jan 24 00:57:28.462031 kubelet[2288]: I0124 00:57:28.461989 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:57:28.462558 kubelet[2288]: E0124 00:57:28.462428 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 24 00:57:28.521649 kubelet[2288]: I0124 00:57:28.521460 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:28.521649 kubelet[2288]: I0124 00:57:28.521575 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:28.521839 kubelet[2288]: I0124 00:57:28.521674 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:28.521839 kubelet[2288]: I0124 00:57:28.521704 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:28.521839 kubelet[2288]: I0124 00:57:28.521727 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:28.521839 kubelet[2288]: I0124 00:57:28.521749 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:28.521839 kubelet[2288]: I0124 00:57:28.521790 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:28.522018 kubelet[2288]: I0124 00:57:28.521811 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:28.522018 kubelet[2288]: I0124 00:57:28.521834 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:28.664722 kubelet[2288]: I0124 00:57:28.664648 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:57:28.665362 kubelet[2288]: E0124 00:57:28.665198 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 24 00:57:28.673695 kubelet[2288]: E0124 00:57:28.673576 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:28.674501 kubelet[2288]: E0124 00:57:28.674417 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:28.675026 containerd[1593]: time="2026-01-24T00:57:28.674864005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa02b7fbde659271b0351fceda547ebb,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:28.675588 containerd[1593]: time="2026-01-24T00:57:28.675079492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:28.679890 kubelet[2288]: E0124 00:57:28.679817 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:28.680484 containerd[1593]: time="2026-01-24T00:57:28.680396964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:28.766455 kubelet[2288]: E0124 00:57:28.766021 2288 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84cdd5ba3464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:57:28.215385188 +0000 UTC m=+0.438748820,LastTimestamp:2026-01-24 00:57:28.215385188 +0000 UTC m=+0.438748820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:57:28.834567 kubelet[2288]: E0124 00:57:28.834447 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Jan 24 00:57:29.068403 kubelet[2288]: I0124 00:57:29.068176 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:57:29.068976 kubelet[2288]: E0124 00:57:29.068672 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jan 24 00:57:29.108810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516225091.mount: Deactivated successfully. Jan 24 00:57:29.118408 containerd[1593]: time="2026-01-24T00:57:29.118161684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:57:29.122839 containerd[1593]: time="2026-01-24T00:57:29.122740652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:57:29.124108 containerd[1593]: time="2026-01-24T00:57:29.123972114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:57:29.125757 containerd[1593]: time="2026-01-24T00:57:29.125595756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:57:29.127896 containerd[1593]: time="2026-01-24T00:57:29.127560294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:57:29.131901 containerd[1593]: time="2026-01-24T00:57:29.131568403Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:57:29.131901 containerd[1593]: time="2026-01-24T00:57:29.131712152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:57:29.136449 containerd[1593]: time="2026-01-24T00:57:29.136376403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:57:29.138647 containerd[1593]: time="2026-01-24T00:57:29.138389725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.365106ms" Jan 24 00:57:29.142598 containerd[1593]: time="2026-01-24T00:57:29.142546045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.409085ms" Jan 24 00:57:29.143894 containerd[1593]: time="2026-01-24T00:57:29.143758479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.283218ms" Jan 24 00:57:29.250601 kubelet[2288]: W0124 00:57:29.250431 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:29.250601 kubelet[2288]: E0124 00:57:29.250536 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:29.286977 containerd[1593]: time="2026-01-24T00:57:29.286570578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:29.286977 containerd[1593]: time="2026-01-24T00:57:29.286669143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:29.286977 containerd[1593]: time="2026-01-24T00:57:29.286688732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.287353 containerd[1593]: time="2026-01-24T00:57:29.286987686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.288392 containerd[1593]: time="2026-01-24T00:57:29.288030786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:29.288392 containerd[1593]: time="2026-01-24T00:57:29.288149743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:29.288392 containerd[1593]: time="2026-01-24T00:57:29.288160743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.288772 containerd[1593]: time="2026-01-24T00:57:29.288305974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.293052 containerd[1593]: time="2026-01-24T00:57:29.292900181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:29.293178 containerd[1593]: time="2026-01-24T00:57:29.292946477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:29.293178 containerd[1593]: time="2026-01-24T00:57:29.292959573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.293408 containerd[1593]: time="2026-01-24T00:57:29.293059622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:29.384664 kubelet[2288]: W0124 00:57:29.383646 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:29.385189 kubelet[2288]: E0124 00:57:29.384801 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:29.399215 containerd[1593]: time="2026-01-24T00:57:29.399104933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b56f54b4e13bfb1a9f5e137167867b1b839688917a12827d4f9a946091e7b07\"" Jan 24 00:57:29.399611 containerd[1593]: time="2026-01-24T00:57:29.399534937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa02b7fbde659271b0351fceda547ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"24652e44f7a6e1b24662cd72065ae1794d4e8ce238ab25fc777ea8cb0050b628\"" Jan 24 00:57:29.399611 containerd[1593]: time="2026-01-24T00:57:29.399587446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c994ed64db8498477c4de3517573da7b524f13a730b9da0c6af1c4eb352115ac\"" Jan 24 00:57:29.400705 kubelet[2288]: E0124 00:57:29.400641 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:29.401405 kubelet[2288]: E0124 00:57:29.401220 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:29.401441 kubelet[2288]: E0124 00:57:29.401426 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:29.404942 containerd[1593]: 
time="2026-01-24T00:57:29.404871729Z" level=info msg="CreateContainer within sandbox \"c994ed64db8498477c4de3517573da7b524f13a730b9da0c6af1c4eb352115ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:57:29.405222 containerd[1593]: time="2026-01-24T00:57:29.405160316Z" level=info msg="CreateContainer within sandbox \"24652e44f7a6e1b24662cd72065ae1794d4e8ce238ab25fc777ea8cb0050b628\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:57:29.405410 containerd[1593]: time="2026-01-24T00:57:29.404880188Z" level=info msg="CreateContainer within sandbox \"9b56f54b4e13bfb1a9f5e137167867b1b839688917a12827d4f9a946091e7b07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:57:29.430946 containerd[1593]: time="2026-01-24T00:57:29.430798210Z" level=info msg="CreateContainer within sandbox \"9b56f54b4e13bfb1a9f5e137167867b1b839688917a12827d4f9a946091e7b07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e720081225d26c8a4b10503a7f623e651c6fdc3c17a633ea07b0dbbc49f0593\"" Jan 24 00:57:29.432202 containerd[1593]: time="2026-01-24T00:57:29.432158488Z" level=info msg="StartContainer for \"2e720081225d26c8a4b10503a7f623e651c6fdc3c17a633ea07b0dbbc49f0593\"" Jan 24 00:57:29.433685 containerd[1593]: time="2026-01-24T00:57:29.433602352Z" level=info msg="CreateContainer within sandbox \"24652e44f7a6e1b24662cd72065ae1794d4e8ce238ab25fc777ea8cb0050b628\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07c1e3b8b2c550c9a34927aec27908425f0f1bebeaa5cc3dd66a50d5ebf4185d\"" Jan 24 00:57:29.434354 containerd[1593]: time="2026-01-24T00:57:29.434229192Z" level=info msg="StartContainer for \"07c1e3b8b2c550c9a34927aec27908425f0f1bebeaa5cc3dd66a50d5ebf4185d\"" Jan 24 00:57:29.437593 containerd[1593]: time="2026-01-24T00:57:29.437515493Z" level=info msg="CreateContainer within sandbox \"c994ed64db8498477c4de3517573da7b524f13a730b9da0c6af1c4eb352115ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59db7b61a7e4e9cfa918183512dda8f0503a68f9f7a9eb241832d69440385ee2\"" Jan 24 00:57:29.438080 containerd[1593]: time="2026-01-24T00:57:29.437948627Z" level=info msg="StartContainer for \"59db7b61a7e4e9cfa918183512dda8f0503a68f9f7a9eb241832d69440385ee2\"" Jan 24 00:57:29.472463 kubelet[2288]: W0124 00:57:29.472335 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jan 24 00:57:29.472463 kubelet[2288]: E0124 00:57:29.472433 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:57:29.540931 containerd[1593]: time="2026-01-24T00:57:29.539801292Z" level=info msg="StartContainer for \"07c1e3b8b2c550c9a34927aec27908425f0f1bebeaa5cc3dd66a50d5ebf4185d\" returns successfully" Jan 24 00:57:29.540931 containerd[1593]: time="2026-01-24T00:57:29.539884661Z" level=info msg="StartContainer for \"2e720081225d26c8a4b10503a7f623e651c6fdc3c17a633ea07b0dbbc49f0593\" returns successfully" Jan 24 00:57:29.570973 containerd[1593]: time="2026-01-24T00:57:29.570741994Z" level=info 
msg="StartContainer for \"59db7b61a7e4e9cfa918183512dda8f0503a68f9f7a9eb241832d69440385ee2\" returns successfully" Jan 24 00:57:29.873973 kubelet[2288]: I0124 00:57:29.873468 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:57:30.282981 kubelet[2288]: E0124 00:57:30.282917 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:30.283442 kubelet[2288]: E0124 00:57:30.283092 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:30.287000 kubelet[2288]: E0124 00:57:30.286943 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:30.287153 kubelet[2288]: E0124 00:57:30.287099 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:30.293175 kubelet[2288]: E0124 00:57:30.293102 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:30.296137 kubelet[2288]: E0124 00:57:30.296060 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:31.106466 kubelet[2288]: E0124 00:57:31.106400 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:57:31.201481 kubelet[2288]: I0124 00:57:31.198989 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:57:31.201481 kubelet[2288]: E0124 00:57:31.199027 2288 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:57:31.213838 kubelet[2288]: E0124 00:57:31.213622 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.293811 kubelet[2288]: E0124 00:57:31.293663 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:31.293811 kubelet[2288]: E0124 00:57:31.293769 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:57:31.294406 kubelet[2288]: E0124 00:57:31.293848 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:31.294406 kubelet[2288]: E0124 00:57:31.293906 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:31.314682 kubelet[2288]: E0124 00:57:31.314588 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.415643 kubelet[2288]: E0124 00:57:31.415525 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 24 00:57:31.515735 kubelet[2288]: E0124 00:57:31.515701 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.616444 kubelet[2288]: E0124 00:57:31.616380 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.717629 kubelet[2288]: E0124 00:57:31.717443 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.818459 kubelet[2288]: E0124 00:57:31.818337 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:31.919132 kubelet[2288]: E0124 00:57:31.918966 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.020559 kubelet[2288]: E0124 00:57:32.020347 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.121508 kubelet[2288]: E0124 00:57:32.121461 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.221927 kubelet[2288]: E0124 00:57:32.221798 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.322670 kubelet[2288]: E0124 00:57:32.322235 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.423549 kubelet[2288]: E0124 00:57:32.423357 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:32.525499 kubelet[2288]: I0124 00:57:32.525425 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:32.538144 kubelet[2288]: I0124 00:57:32.538059 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:32.545724 kubelet[2288]: I0124 00:57:32.545683 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:32.655889 kubelet[2288]: I0124 00:57:32.655640 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:32.662106 kubelet[2288]: E0124 00:57:32.662005 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:32.662555 kubelet[2288]: E0124 00:57:32.662415 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.213377 kubelet[2288]: I0124 00:57:33.213189 2288 apiserver.go:52] "Watching apiserver" Jan 24 00:57:33.217042 kubelet[2288]: E0124 00:57:33.216941 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.217114 kubelet[2288]: E0124 00:57:33.217055 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.220638 kubelet[2288]: I0124 00:57:33.220428 2288 desired_state_of_world_populator.go:158] 
"Finished populating initial desired state of world" Jan 24 00:57:33.296997 kubelet[2288]: E0124 00:57:33.296850 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.620877 kubelet[2288]: E0124 00:57:33.620672 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.726853 systemd[1]: Reloading requested from client PID 2569 ('systemctl') (unit session-7.scope)... Jan 24 00:57:33.726908 systemd[1]: Reloading... Jan 24 00:57:33.820410 zram_generator::config[2605]: No configuration found. Jan 24 00:57:34.007726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:57:34.088130 systemd[1]: Reloading finished in 360 ms. Jan 24 00:57:34.146445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:34.157136 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:57:34.157691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:34.167568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:57:34.343863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:57:34.351908 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:57:34.435628 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:57:34.435628 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:57:34.435628 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:57:34.436077 kubelet[2664]: I0124 00:57:34.435691 2664 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:57:34.446851 kubelet[2664]: I0124 00:57:34.446639 2664 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:57:34.446851 kubelet[2664]: I0124 00:57:34.446670 2664 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:57:34.447114 kubelet[2664]: I0124 00:57:34.447048 2664 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:57:34.452974 kubelet[2664]: I0124 00:57:34.452907 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 24 00:57:34.455359 kubelet[2664]: I0124 00:57:34.455153 2664 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:57:34.460225 kubelet[2664]: E0124 00:57:34.460183 2664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:57:34.460225 kubelet[2664]: I0124 00:57:34.460204 2664 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:57:34.465584 kubelet[2664]: I0124 00:57:34.465521 2664 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:57:34.466211 kubelet[2664]: I0124 00:57:34.466081 2664 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:57:34.466364 kubelet[2664]: I0124 00:57:34.466148 2664 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:57:34.466488 kubelet[2664]: I0124 00:57:34.466369 2664 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:57:34.466488 kubelet[2664]: I0124 00:57:34.466383 2664 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:57:34.466488 kubelet[2664]: I0124 00:57:34.466428 2664 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:57:34.466656 kubelet[2664]: I0124 00:57:34.466591 2664 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:57:34.466656 kubelet[2664]: I0124 00:57:34.466647 2664 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:57:34.466704 kubelet[2664]: I0124 00:57:34.466663 2664 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:57:34.466704 kubelet[2664]: I0124 00:57:34.466673 2664 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 24 00:57:34.469568 kubelet[2664]: I0124 00:57:34.467647 2664 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:57:34.469568 kubelet[2664]: I0124 00:57:34.468094 2664 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:57:34.469568 kubelet[2664]: I0124 00:57:34.468651 2664 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:57:34.469568 kubelet[2664]: I0124 00:57:34.468677 2664 server.go:1287] "Started kubelet" Jan 24 00:57:34.470065 kubelet[2664]: I0124 00:57:34.470019 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:57:34.470487 kubelet[2664]: I0124 00:57:34.470471 2664 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:57:34.470607 kubelet[2664]: I0124 00:57:34.470588 2664 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:57:34.474686 kubelet[2664]: I0124 00:57:34.474631 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:57:34.479098 kubelet[2664]: I0124 00:57:34.479030 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:57:34.479933 kubelet[2664]: I0124 00:57:34.479884 2664 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:57:34.480092 kubelet[2664]: E0124 00:57:34.480049 2664 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:57:34.480930 kubelet[2664]: I0124 00:57:34.480876 2664 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:57:34.481092 kubelet[2664]: I0124 00:57:34.481034 2664 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:57:34.483330 kubelet[2664]: I0124 00:57:34.483055 2664 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:57:34.484708 kubelet[2664]: I0124 00:57:34.484610 2664 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:57:34.484770 kubelet[2664]: I0124 00:57:34.484748 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:57:34.492687 kubelet[2664]: I0124 00:57:34.492660 2664 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:57:34.498233 kubelet[2664]: E0124 00:57:34.498211 2664 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:57:34.520045 kubelet[2664]: I0124 00:57:34.520009 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:57:34.522557 kubelet[2664]: I0124 00:57:34.522467 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:57:34.522557 kubelet[2664]: I0124 00:57:34.522532 2664 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:57:34.522557 kubelet[2664]: I0124 00:57:34.522551 2664 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:57:34.522557 kubelet[2664]: I0124 00:57:34.522559 2664 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:57:34.522709 kubelet[2664]: E0124 00:57:34.522606 2664 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:57:34.566911 kubelet[2664]: I0124 00:57:34.566827 2664 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:57:34.566911 kubelet[2664]: I0124 00:57:34.566904 2664 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:57:34.567079 kubelet[2664]: I0124 00:57:34.566929 2664 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:57:34.567107 kubelet[2664]: I0124 00:57:34.567095 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:57:34.567131 kubelet[2664]: I0124 00:57:34.567106 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:57:34.567131 kubelet[2664]: I0124 00:57:34.567125 2664 policy_none.go:49] "None policy: Start" Jan 24 00:57:34.567165 kubelet[2664]: I0124 00:57:34.567135 2664 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:57:34.567165 kubelet[2664]: I0124 00:57:34.567147 2664 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:57:34.567428 kubelet[2664]: I0124 00:57:34.567386 2664 state_mem.go:75] "Updated machine memory state" Jan 24 00:57:34.569644 kubelet[2664]: I0124 00:57:34.569571 2664 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:57:34.572234 kubelet[2664]: I0124 00:57:34.571840 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:57:34.572234 kubelet[2664]: I0124 00:57:34.571918 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:57:34.572234 kubelet[2664]: I0124 00:57:34.572173 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:57:34.574226 kubelet[2664]: E0124 00:57:34.574151 2664 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:57:34.624184 kubelet[2664]: I0124 00:57:34.623964 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:34.624372 kubelet[2664]: I0124 00:57:34.624194 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:34.624741 kubelet[2664]: I0124 00:57:34.624452 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.633155 kubelet[2664]: E0124 00:57:34.632981 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:34.636213 kubelet[2664]: E0124 00:57:34.636180 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:34.637199 kubelet[2664]: E0124 00:57:34.636688 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.681862 kubelet[2664]: I0124 00:57:34.681757 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:34.681862 kubelet[2664]: I0124 00:57:34.681847 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.681862 kubelet[2664]: I0124 00:57:34.681870 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.682153 kubelet[2664]: I0124 00:57:34.681887 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:34.682153 kubelet[2664]: I0124 00:57:34.681904 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:34.682153 kubelet[2664]: I0124 00:57:34.681919 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa02b7fbde659271b0351fceda547ebb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa02b7fbde659271b0351fceda547ebb\") " pod="kube-system/kube-apiserver-localhost" 
Jan 24 00:57:34.682153 kubelet[2664]: I0124 00:57:34.681934 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.682153 kubelet[2664]: I0124 00:57:34.681947 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.683434 kubelet[2664]: I0124 00:57:34.682020 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:57:34.683434 kubelet[2664]: I0124 00:57:34.683414 2664 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:57:34.694136 kubelet[2664]: I0124 00:57:34.694098 2664 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:57:34.694136 kubelet[2664]: I0124 00:57:34.694178 2664 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:57:34.934656 kubelet[2664]: E0124 00:57:34.934500 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:34.938551 kubelet[2664]: E0124 00:57:34.937532 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:34.939598 kubelet[2664]: E0124 00:57:34.939459 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:35.468545 kubelet[2664]: I0124 00:57:35.468386 2664 apiserver.go:52] "Watching apiserver" Jan 24 00:57:35.482046 kubelet[2664]: I0124 00:57:35.481744 2664 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:57:35.541636 kubelet[2664]: E0124 00:57:35.541361 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:35.541636 kubelet[2664]: I0124 00:57:35.541367 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:35.541636 kubelet[2664]: I0124 00:57:35.541491 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:35.548933 kubelet[2664]: E0124 00:57:35.548733 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:57:35.548933 kubelet[2664]: E0124 00:57:35.548842 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:35.549211 kubelet[2664]: E0124 00:57:35.549131 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:57:35.549347 kubelet[2664]: E0124 00:57:35.549315 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:35.587542 kubelet[2664]: I0124 00:57:35.587350 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.587330883 podStartE2EDuration="3.587330883s" podCreationTimestamp="2026-01-24 00:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:35.573398303 +0000 UTC m=+1.215544100" watchObservedRunningTime="2026-01-24 00:57:35.587330883 +0000 UTC m=+1.229476679" Jan 24 00:57:35.599065 kubelet[2664]: I0124 00:57:35.598984 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.598971263 podStartE2EDuration="3.598971263s" podCreationTimestamp="2026-01-24 00:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:35.598879791 +0000 UTC m=+1.241025588" watchObservedRunningTime="2026-01-24 00:57:35.598971263 +0000 UTC m=+1.241117059" Jan 24 00:57:35.599547 kubelet[2664]: I0124 00:57:35.599083 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5990746160000002 podStartE2EDuration="3.599074616s" podCreationTimestamp="2026-01-24 00:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:35.587598278 +0000 UTC m=+1.229744075" watchObservedRunningTime="2026-01-24 00:57:35.599074616 +0000 UTC m=+1.241220422" Jan 24 00:57:36.543855 kubelet[2664]: E0124 00:57:36.543798 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:36.544382 kubelet[2664]: E0124 00:57:36.543909 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:37.546195 kubelet[2664]: E0124 00:57:37.546026 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:37.546195 kubelet[2664]: E0124 00:57:37.546084 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:38.356076 kubelet[2664]: I0124 00:57:38.355922 2664 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:57:38.356442 containerd[1593]: time="2026-01-24T00:57:38.356343921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:57:38.356851 kubelet[2664]: I0124 00:57:38.356776 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:57:39.116828 kubelet[2664]: I0124 00:57:39.116531 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9333b065-4215-497b-af65-69e8e0a720b3-kube-proxy\") pod \"kube-proxy-qn2qx\" (UID: \"9333b065-4215-497b-af65-69e8e0a720b3\") " pod="kube-system/kube-proxy-qn2qx" Jan 24 00:57:39.116828 kubelet[2664]: I0124 00:57:39.116695 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9333b065-4215-497b-af65-69e8e0a720b3-xtables-lock\") pod \"kube-proxy-qn2qx\" (UID: \"9333b065-4215-497b-af65-69e8e0a720b3\") " pod="kube-system/kube-proxy-qn2qx" Jan 24 00:57:39.116828 kubelet[2664]: I0124 00:57:39.116729 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9333b065-4215-497b-af65-69e8e0a720b3-lib-modules\") pod \"kube-proxy-qn2qx\" (UID: \"9333b065-4215-497b-af65-69e8e0a720b3\") " pod="kube-system/kube-proxy-qn2qx" Jan 24 00:57:39.116828 kubelet[2664]: I0124 00:57:39.116762 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxr9j\" (UniqueName: \"kubernetes.io/projected/9333b065-4215-497b-af65-69e8e0a720b3-kube-api-access-nxr9j\") pod \"kube-proxy-qn2qx\" (UID: \"9333b065-4215-497b-af65-69e8e0a720b3\") " pod="kube-system/kube-proxy-qn2qx" Jan 24 00:57:39.365808 kubelet[2664]: E0124 00:57:39.365574 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:39.366613 containerd[1593]: time="2026-01-24T00:57:39.366530243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn2qx,Uid:9333b065-4215-497b-af65-69e8e0a720b3,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:39.419352 containerd[1593]: time="2026-01-24T00:57:39.418845453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:39.419352 containerd[1593]: time="2026-01-24T00:57:39.419069894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:39.419352 containerd[1593]: time="2026-01-24T00:57:39.419094599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:39.419693 containerd[1593]: time="2026-01-24T00:57:39.419364459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:39.539565 containerd[1593]: time="2026-01-24T00:57:39.539041590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn2qx,Uid:9333b065-4215-497b-af65-69e8e0a720b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f88d7c5d8f9512f05fecf364b0180e011752c374af5b8b09a8ce5803624dd12\"" Jan 24 00:57:39.542382 kubelet[2664]: E0124 00:57:39.541799 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:39.552157 containerd[1593]: time="2026-01-24T00:57:39.551623228Z" level=info msg="CreateContainer within sandbox \"0f88d7c5d8f9512f05fecf364b0180e011752c374af5b8b09a8ce5803624dd12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:57:39.595417 containerd[1593]: time="2026-01-24T00:57:39.595166488Z" level=info msg="CreateContainer within sandbox \"0f88d7c5d8f9512f05fecf364b0180e011752c374af5b8b09a8ce5803624dd12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a746e99e7745214b09f4dbaee0f529e1c1bd596503bda5082f6818bb8a895a44\"" Jan 24 00:57:39.596461 containerd[1593]: time="2026-01-24T00:57:39.596155630Z" level=info msg="StartContainer for \"a746e99e7745214b09f4dbaee0f529e1c1bd596503bda5082f6818bb8a895a44\"" Jan 24 00:57:39.622816 kubelet[2664]: I0124 00:57:39.620822 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3006c03-8bd5-4fcc-be8f-9e1692268ed7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-k8t96\" (UID: \"d3006c03-8bd5-4fcc-be8f-9e1692268ed7\") " pod="tigera-operator/tigera-operator-7dcd859c48-k8t96" Jan 24 00:57:39.622816 kubelet[2664]: I0124 00:57:39.620925 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576hq\" (UniqueName: \"kubernetes.io/projected/d3006c03-8bd5-4fcc-be8f-9e1692268ed7-kube-api-access-576hq\") pod \"tigera-operator-7dcd859c48-k8t96\" (UID: \"d3006c03-8bd5-4fcc-be8f-9e1692268ed7\") " pod="tigera-operator/tigera-operator-7dcd859c48-k8t96" Jan 24 00:57:39.712747 containerd[1593]: time="2026-01-24T00:57:39.712497425Z" level=info msg="StartContainer for \"a746e99e7745214b09f4dbaee0f529e1c1bd596503bda5082f6818bb8a895a44\" returns successfully" Jan 24 00:57:39.866546 containerd[1593]: time="2026-01-24T00:57:39.866330119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k8t96,Uid:d3006c03-8bd5-4fcc-be8f-9e1692268ed7,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:57:39.910479 containerd[1593]: time="2026-01-24T00:57:39.909881949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:39.910479 containerd[1593]: time="2026-01-24T00:57:39.909964537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:39.910479 containerd[1593]: time="2026-01-24T00:57:39.909988599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:39.910479 containerd[1593]: time="2026-01-24T00:57:39.910092994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:40.008503 containerd[1593]: time="2026-01-24T00:57:40.008159571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k8t96,Uid:d3006c03-8bd5-4fcc-be8f-9e1692268ed7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3248b669d64ab21dbfdf5902d2cf6d5e1d546cb66de26b6416611d781b2102f8\"" Jan 24 00:57:40.011155 containerd[1593]: time="2026-01-24T00:57:40.010972064Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:57:40.242182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116893259.mount: Deactivated successfully. Jan 24 00:57:40.579997 kubelet[2664]: E0124 00:57:40.579914 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:41.196501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544878352.mount: Deactivated successfully. Jan 24 00:57:41.586047 kubelet[2664]: E0124 00:57:41.585836 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:42.973456 containerd[1593]: time="2026-01-24T00:57:42.973415372Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:42.974706 containerd[1593]: time="2026-01-24T00:57:42.974631591Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:57:42.976220 containerd[1593]: time="2026-01-24T00:57:42.976106627Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:42.979794 containerd[1593]: time="2026-01-24T00:57:42.979623001Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:42.980670 containerd[1593]: time="2026-01-24T00:57:42.980485223Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.96944366s" Jan 24 00:57:42.980670 containerd[1593]: time="2026-01-24T00:57:42.980561886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:57:42.983855 containerd[1593]: time="2026-01-24T00:57:42.983718972Z" level=info msg="CreateContainer within sandbox \"3248b669d64ab21dbfdf5902d2cf6d5e1d546cb66de26b6416611d781b2102f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:57:43.001079 containerd[1593]: time="2026-01-24T00:57:43.000908030Z" level=info msg="CreateContainer within sandbox \"3248b669d64ab21dbfdf5902d2cf6d5e1d546cb66de26b6416611d781b2102f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7e937a6335982aeac721962cc2bc533bf39d04d427d9523ded29402907850981\"" Jan 24 00:57:43.001927 containerd[1593]: time="2026-01-24T00:57:43.001868503Z" 
level=info msg="StartContainer for \"7e937a6335982aeac721962cc2bc533bf39d04d427d9523ded29402907850981\"" Jan 24 00:57:43.044792 systemd[1]: run-containerd-runc-k8s.io-7e937a6335982aeac721962cc2bc533bf39d04d427d9523ded29402907850981-runc.Bh7yfN.mount: Deactivated successfully. Jan 24 00:57:43.084610 containerd[1593]: time="2026-01-24T00:57:43.083554984Z" level=info msg="StartContainer for \"7e937a6335982aeac721962cc2bc533bf39d04d427d9523ded29402907850981\" returns successfully" Jan 24 00:57:43.424798 kubelet[2664]: E0124 00:57:43.424752 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:43.442308 kubelet[2664]: I0124 00:57:43.442014 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qn2qx" podStartSLOduration=4.441995606 podStartE2EDuration="4.441995606s" podCreationTimestamp="2026-01-24 00:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:40.59075816 +0000 UTC m=+6.232903957" watchObservedRunningTime="2026-01-24 00:57:43.441995606 +0000 UTC m=+9.084141402" Jan 24 00:57:43.594813 kubelet[2664]: E0124 00:57:43.594676 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:45.171748 kubelet[2664]: E0124 00:57:45.171056 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:45.204183 kubelet[2664]: I0124 00:57:45.204034 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-k8t96" podStartSLOduration=3.232205027 podStartE2EDuration="6.204011857s" podCreationTimestamp="2026-01-24 00:57:39 +0000 UTC" firstStartedPulling="2026-01-24 00:57:40.010140141 +0000 UTC m=+5.652285936" lastFinishedPulling="2026-01-24 00:57:42.981946969 +0000 UTC m=+8.624092766" observedRunningTime="2026-01-24 00:57:43.622165009 +0000 UTC m=+9.264310815" watchObservedRunningTime="2026-01-24 00:57:45.204011857 +0000 UTC m=+10.846157654" Jan 24 00:57:45.599113 kubelet[2664]: E0124 00:57:45.598234 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:47.327516 kubelet[2664]: E0124 00:57:47.327469 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:49.005583 sudo[1780]: pam_unix(sudo:session): session closed for user root Jan 24 00:57:49.009719 sshd[1773]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:49.014810 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:50030.service: Deactivated successfully. Jan 24 00:57:49.019909 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:57:49.021756 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:57:49.024942 systemd-logind[1562]: Removed session 7. Jan 24 00:57:50.249335 update_engine[1563]: I20260124 00:57:50.246357 1563 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:57:50.348392 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3075) Jan 24 00:57:50.471170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3073) Jan 24 00:57:50.556918 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3073) Jan 24 00:57:54.125205 kubelet[2664]: I0124 00:57:54.125006 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtmm\" (UniqueName: \"kubernetes.io/projected/f990cf9f-b893-4a13-b6af-ab7bf97711a7-kube-api-access-2xtmm\") pod \"calico-typha-5f74fbf978-trlbk\" (UID: \"f990cf9f-b893-4a13-b6af-ab7bf97711a7\") " pod="calico-system/calico-typha-5f74fbf978-trlbk" Jan 24 00:57:54.125205 kubelet[2664]: I0124 00:57:54.125107 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f990cf9f-b893-4a13-b6af-ab7bf97711a7-typha-certs\") pod \"calico-typha-5f74fbf978-trlbk\" (UID: \"f990cf9f-b893-4a13-b6af-ab7bf97711a7\") " pod="calico-system/calico-typha-5f74fbf978-trlbk" Jan 24 00:57:54.125205 kubelet[2664]: I0124 00:57:54.125131 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f990cf9f-b893-4a13-b6af-ab7bf97711a7-tigera-ca-bundle\") pod \"calico-typha-5f74fbf978-trlbk\" (UID: \"f990cf9f-b893-4a13-b6af-ab7bf97711a7\") " pod="calico-system/calico-typha-5f74fbf978-trlbk" Jan 24 00:57:54.283749 kubelet[2664]: E0124 00:57:54.283454 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:54.284137 containerd[1593]: time="2026-01-24T00:57:54.284092870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f74fbf978-trlbk,Uid:f990cf9f-b893-4a13-b6af-ab7bf97711a7,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:54.328393 kubelet[2664]: I0124 00:57:54.325965 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-lib-modules\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328393 kubelet[2664]: I0124 00:57:54.326039 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-var-run-calico\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328393 kubelet[2664]: I0124 00:57:54.326060 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-policysync\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328393 kubelet[2664]: I0124 00:57:54.326073 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-flexvol-driver-host\") pod 
\"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328393 kubelet[2664]: I0124 00:57:54.326092 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-cni-log-dir\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328623 kubelet[2664]: I0124 00:57:54.326105 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-cni-net-dir\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328623 kubelet[2664]: I0124 00:57:54.326118 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-var-lib-calico\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328623 kubelet[2664]: I0124 00:57:54.326132 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9p7f\" (UniqueName: \"kubernetes.io/projected/26751731-34d4-45d1-9483-28c897e63fdf-kube-api-access-m9p7f\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328623 kubelet[2664]: I0124 00:57:54.326147 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-xtables-lock\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328623 kubelet[2664]: I0124 00:57:54.326164 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26751731-34d4-45d1-9483-28c897e63fdf-cni-bin-dir\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328736 kubelet[2664]: I0124 00:57:54.326177 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26751731-34d4-45d1-9483-28c897e63fdf-node-certs\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.328736 kubelet[2664]: I0124 00:57:54.326190 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26751731-34d4-45d1-9483-28c897e63fdf-tigera-ca-bundle\") pod \"calico-node-6qxq4\" (UID: \"26751731-34d4-45d1-9483-28c897e63fdf\") " pod="calico-system/calico-node-6qxq4" Jan 24 00:57:54.332029 containerd[1593]: time="2026-01-24T00:57:54.331514661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:54.332029 containerd[1593]: time="2026-01-24T00:57:54.331657117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:54.334749 containerd[1593]: time="2026-01-24T00:57:54.334511062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:54.336871 containerd[1593]: time="2026-01-24T00:57:54.335404662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:54.357127 kubelet[2664]: E0124 00:57:54.357038 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:57:54.445348 kubelet[2664]: E0124 00:57:54.443008 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.445348 kubelet[2664]: W0124 00:57:54.443150 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.445348 kubelet[2664]: E0124 00:57:54.443399 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.449335 kubelet[2664]: E0124 00:57:54.446751 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.449335 kubelet[2664]: W0124 00:57:54.447020 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.449335 kubelet[2664]: E0124 00:57:54.447039 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.452677 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.456689 kubelet[2664]: W0124 00:57:54.452818 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.453175 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.453773 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.456689 kubelet[2664]: W0124 00:57:54.453795 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.453826 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.454348 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.456689 kubelet[2664]: W0124 00:57:54.454360 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.456689 kubelet[2664]: E0124 00:57:54.454373 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.457064 kubelet[2664]: E0124 00:57:54.456941 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.457064 kubelet[2664]: W0124 00:57:54.456953 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.457064 kubelet[2664]: E0124 00:57:54.456966 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.460409 kubelet[2664]: E0124 00:57:54.459629 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.460409 kubelet[2664]: W0124 00:57:54.459652 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.460409 kubelet[2664]: E0124 00:57:54.459670 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.463068 kubelet[2664]: E0124 00:57:54.462932 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.463068 kubelet[2664]: W0124 00:57:54.462959 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.463068 kubelet[2664]: E0124 00:57:54.462985 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.465106 kubelet[2664]: E0124 00:57:54.464845 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.465106 kubelet[2664]: W0124 00:57:54.464913 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.465106 kubelet[2664]: E0124 00:57:54.464931 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.466484 kubelet[2664]: E0124 00:57:54.466388 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.466755 kubelet[2664]: W0124 00:57:54.466618 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.466755 kubelet[2664]: E0124 00:57:54.466639 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.470338 kubelet[2664]: E0124 00:57:54.470090 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.470338 kubelet[2664]: W0124 00:57:54.470103 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.470338 kubelet[2664]: E0124 00:57:54.470117 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.472006 kubelet[2664]: E0124 00:57:54.471766 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.472006 kubelet[2664]: W0124 00:57:54.471778 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.472006 kubelet[2664]: E0124 00:57:54.471794 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.474194 kubelet[2664]: E0124 00:57:54.474044 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.474568 kubelet[2664]: W0124 00:57:54.474543 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.475172 kubelet[2664]: E0124 00:57:54.475029 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.476233 kubelet[2664]: E0124 00:57:54.476009 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.476233 kubelet[2664]: W0124 00:57:54.476022 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.476233 kubelet[2664]: E0124 00:57:54.476168 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.477032 kubelet[2664]: E0124 00:57:54.477017 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.477141 kubelet[2664]: W0124 00:57:54.477106 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.477141 kubelet[2664]: E0124 00:57:54.477123 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.479703 kubelet[2664]: E0124 00:57:54.479374 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:54.483338 kubelet[2664]: E0124 00:57:54.482729 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.483338 kubelet[2664]: W0124 00:57:54.482746 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.483338 kubelet[2664]: E0124 00:57:54.482765 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.484091 kubelet[2664]: E0124 00:57:54.484013 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.484091 kubelet[2664]: W0124 00:57:54.484027 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.484091 kubelet[2664]: E0124 00:57:54.484044 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.484699 kubelet[2664]: E0124 00:57:54.484686 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.484817 kubelet[2664]: W0124 00:57:54.484755 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.484817 kubelet[2664]: E0124 00:57:54.484771 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.485928 containerd[1593]: time="2026-01-24T00:57:54.485507588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qxq4,Uid:26751731-34d4-45d1-9483-28c897e63fdf,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:54.486100 kubelet[2664]: E0124 00:57:54.486088 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.486218 kubelet[2664]: W0124 00:57:54.486136 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.486218 kubelet[2664]: E0124 00:57:54.486150 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.486764 kubelet[2664]: E0124 00:57:54.486752 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.486828 kubelet[2664]: W0124 00:57:54.486818 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.487010 kubelet[2664]: E0124 00:57:54.486959 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.487441 kubelet[2664]: E0124 00:57:54.487223 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.487441 kubelet[2664]: W0124 00:57:54.487234 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.487441 kubelet[2664]: E0124 00:57:54.487370 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.487957 kubelet[2664]: E0124 00:57:54.487833 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.487957 kubelet[2664]: W0124 00:57:54.487894 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.487957 kubelet[2664]: E0124 00:57:54.487915 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.488531 kubelet[2664]: E0124 00:57:54.488512 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.488531 kubelet[2664]: W0124 00:57:54.488526 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.489017 kubelet[2664]: E0124 00:57:54.488538 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.494385 containerd[1593]: time="2026-01-24T00:57:54.494143657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f74fbf978-trlbk,Uid:f990cf9f-b893-4a13-b6af-ab7bf97711a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe3c9571f0c819a8788671964aaa6fd22353ae728c98340b059757da10308516\"" Jan 24 00:57:54.495714 kubelet[2664]: E0124 00:57:54.495511 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:54.498349 containerd[1593]: time="2026-01-24T00:57:54.497995537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:57:54.529776 kubelet[2664]: E0124 00:57:54.529368 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.529776 kubelet[2664]: W0124 00:57:54.529385 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.529776 kubelet[2664]: E0124 00:57:54.529499 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.529776 kubelet[2664]: I0124 00:57:54.529669 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b7b68612-f671-4faf-9c72-eb6b0593666c-varrun\") pod \"csi-node-driver-7gdj7\" (UID: \"b7b68612-f671-4faf-9c72-eb6b0593666c\") " pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:57:54.530615 kubelet[2664]: E0124 00:57:54.530508 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.530615 kubelet[2664]: W0124 00:57:54.530553 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.530615 kubelet[2664]: E0124 00:57:54.530569 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.530615 kubelet[2664]: I0124 00:57:54.530584 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7b68612-f671-4faf-9c72-eb6b0593666c-registration-dir\") pod \"csi-node-driver-7gdj7\" (UID: \"b7b68612-f671-4faf-9c72-eb6b0593666c\") " pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:57:54.533108 kubelet[2664]: E0124 00:57:54.532950 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.533108 kubelet[2664]: W0124 00:57:54.532993 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.533108 kubelet[2664]: E0124 00:57:54.533010 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.533108 kubelet[2664]: I0124 00:57:54.533027 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7b68612-f671-4faf-9c72-eb6b0593666c-socket-dir\") pod \"csi-node-driver-7gdj7\" (UID: \"b7b68612-f671-4faf-9c72-eb6b0593666c\") " pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:57:54.533778 kubelet[2664]: E0124 00:57:54.533652 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.533778 kubelet[2664]: W0124 00:57:54.533663 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.533837 kubelet[2664]: E0124 00:57:54.533799 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.533837 kubelet[2664]: I0124 00:57:54.533818 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7b68612-f671-4faf-9c72-eb6b0593666c-kubelet-dir\") pod \"csi-node-driver-7gdj7\" (UID: \"b7b68612-f671-4faf-9c72-eb6b0593666c\") " pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:57:54.534094 containerd[1593]: time="2026-01-24T00:57:54.532446108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:54.534094 containerd[1593]: time="2026-01-24T00:57:54.532525002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:54.534094 containerd[1593]: time="2026-01-24T00:57:54.532672069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:54.534411 kubelet[2664]: E0124 00:57:54.534346 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.534411 kubelet[2664]: W0124 00:57:54.534356 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.534531 containerd[1593]: time="2026-01-24T00:57:54.534385283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:54.534561 kubelet[2664]: E0124 00:57:54.534491 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.535104 kubelet[2664]: E0124 00:57:54.535060 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.535104 kubelet[2664]: W0124 00:57:54.535071 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.535478 kubelet[2664]: E0124 00:57:54.535207 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.536843 kubelet[2664]: E0124 00:57:54.536687 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.536843 kubelet[2664]: W0124 00:57:54.536730 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.537344 kubelet[2664]: E0124 00:57:54.537084 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.537961 kubelet[2664]: E0124 00:57:54.537802 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.538040 kubelet[2664]: W0124 00:57:54.537997 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.538357 kubelet[2664]: E0124 00:57:54.538173 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.538357 kubelet[2664]: I0124 00:57:54.538225 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwx2z\" (UniqueName: \"kubernetes.io/projected/b7b68612-f671-4faf-9c72-eb6b0593666c-kube-api-access-lwx2z\") pod \"csi-node-driver-7gdj7\" (UID: \"b7b68612-f671-4faf-9c72-eb6b0593666c\") " pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:57:54.538764 kubelet[2664]: E0124 00:57:54.538742 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.538764 kubelet[2664]: W0124 00:57:54.538752 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.539094 kubelet[2664]: E0124 00:57:54.539070 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.539479 kubelet[2664]: E0124 00:57:54.539325 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.539479 kubelet[2664]: W0124 00:57:54.539366 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.539531 kubelet[2664]: E0124 00:57:54.539481 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.540387 kubelet[2664]: E0124 00:57:54.540317 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.540387 kubelet[2664]: W0124 00:57:54.540358 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.540455 kubelet[2664]: E0124 00:57:54.540409 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.541120 kubelet[2664]: E0124 00:57:54.540964 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.541120 kubelet[2664]: W0124 00:57:54.541008 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.541120 kubelet[2664]: E0124 00:57:54.541018 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.542005 kubelet[2664]: E0124 00:57:54.541943 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.542005 kubelet[2664]: W0124 00:57:54.541987 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.542005 kubelet[2664]: E0124 00:57:54.541997 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.542365 kubelet[2664]: E0124 00:57:54.542345 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.542365 kubelet[2664]: W0124 00:57:54.542356 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.542365 kubelet[2664]: E0124 00:57:54.542365 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.543058 kubelet[2664]: E0124 00:57:54.543044 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.543058 kubelet[2664]: W0124 00:57:54.543055 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.543120 kubelet[2664]: E0124 00:57:54.543064 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.615842 containerd[1593]: time="2026-01-24T00:57:54.615684522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qxq4,Uid:26751731-34d4-45d1-9483-28c897e63fdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\"" Jan 24 00:57:54.616640 kubelet[2664]: E0124 00:57:54.616499 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:54.644482 kubelet[2664]: E0124 00:57:54.644434 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.644482 kubelet[2664]: W0124 00:57:54.644484 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.644573 kubelet[2664]: E0124 00:57:54.644502 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.645146 kubelet[2664]: E0124 00:57:54.645104 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.645188 kubelet[2664]: W0124 00:57:54.645149 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.645215 kubelet[2664]: E0124 00:57:54.645200 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.645956 kubelet[2664]: E0124 00:57:54.645791 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.645956 kubelet[2664]: W0124 00:57:54.645838 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.645956 kubelet[2664]: E0124 00:57:54.645887 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.646403 kubelet[2664]: E0124 00:57:54.646359 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.646403 kubelet[2664]: W0124 00:57:54.646404 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.646493 kubelet[2664]: E0124 00:57:54.646475 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.646985 kubelet[2664]: E0124 00:57:54.646880 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.646985 kubelet[2664]: W0124 00:57:54.646960 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.647180 kubelet[2664]: E0124 00:57:54.647130 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.647509 kubelet[2664]: E0124 00:57:54.647468 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.647541 kubelet[2664]: W0124 00:57:54.647511 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.647743 kubelet[2664]: E0124 00:57:54.647672 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.648616 kubelet[2664]: E0124 00:57:54.648574 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.648658 kubelet[2664]: W0124 00:57:54.648617 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.648783 kubelet[2664]: E0124 00:57:54.648739 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.649383 kubelet[2664]: E0124 00:57:54.649361 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.649383 kubelet[2664]: W0124 00:57:54.649376 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.649598 kubelet[2664]: E0124 00:57:54.649465 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.649990 kubelet[2664]: E0124 00:57:54.649972 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.649990 kubelet[2664]: W0124 00:57:54.649984 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.650215 kubelet[2664]: E0124 00:57:54.650126 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.650514 kubelet[2664]: E0124 00:57:54.650465 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.650514 kubelet[2664]: W0124 00:57:54.650508 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.650667 kubelet[2664]: E0124 00:57:54.650617 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.651041 kubelet[2664]: E0124 00:57:54.650977 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.651041 kubelet[2664]: W0124 00:57:54.651022 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.651141 kubelet[2664]: E0124 00:57:54.651123 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.651558 kubelet[2664]: E0124 00:57:54.651517 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.651592 kubelet[2664]: W0124 00:57:54.651559 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.651722 kubelet[2664]: E0124 00:57:54.651676 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.652086 kubelet[2664]: E0124 00:57:54.652021 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.652086 kubelet[2664]: W0124 00:57:54.652068 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.652397 kubelet[2664]: E0124 00:57:54.652204 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.652676 kubelet[2664]: E0124 00:57:54.652608 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.652676 kubelet[2664]: W0124 00:57:54.652656 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.652828 kubelet[2664]: E0124 00:57:54.652728 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.653118 kubelet[2664]: E0124 00:57:54.653077 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.653118 kubelet[2664]: W0124 00:57:54.653118 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.653225 kubelet[2664]: E0124 00:57:54.653212 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.653717 kubelet[2664]: E0124 00:57:54.653653 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.653717 kubelet[2664]: W0124 00:57:54.653700 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.653782 kubelet[2664]: E0124 00:57:54.653768 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.654183 kubelet[2664]: E0124 00:57:54.654126 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.654183 kubelet[2664]: W0124 00:57:54.654172 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.654354 kubelet[2664]: E0124 00:57:54.654332 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.654776 kubelet[2664]: E0124 00:57:54.654708 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.654776 kubelet[2664]: W0124 00:57:54.654756 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.654881 kubelet[2664]: E0124 00:57:54.654835 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.655436 kubelet[2664]: E0124 00:57:54.655354 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.655436 kubelet[2664]: W0124 00:57:54.655405 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.655574 kubelet[2664]: E0124 00:57:54.655520 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.655843 kubelet[2664]: E0124 00:57:54.655780 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.655843 kubelet[2664]: W0124 00:57:54.655826 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.656055 kubelet[2664]: E0124 00:57:54.656011 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.656458 kubelet[2664]: E0124 00:57:54.656415 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.656458 kubelet[2664]: W0124 00:57:54.656458 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.656582 kubelet[2664]: E0124 00:57:54.656545 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:54.657498 kubelet[2664]: E0124 00:57:54.657161 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.657498 kubelet[2664]: W0124 00:57:54.657171 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.657498 kubelet[2664]: E0124 00:57:54.657404 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.657874 kubelet[2664]: E0124 00:57:54.657729 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.657874 kubelet[2664]: W0124 00:57:54.657740 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.657874 kubelet[2664]: E0124 00:57:54.657818 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.658667 kubelet[2664]: E0124 00:57:54.658595 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.658667 kubelet[2664]: W0124 00:57:54.658656 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.658913 kubelet[2664]: E0124 00:57:54.658850 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.660038 kubelet[2664]: E0124 00:57:54.659914 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.660038 kubelet[2664]: W0124 00:57:54.660008 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.660038 kubelet[2664]: E0124 00:57:54.660018 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:54.674772 kubelet[2664]: E0124 00:57:54.674558 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:54.674772 kubelet[2664]: W0124 00:57:54.674609 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:54.674772 kubelet[2664]: E0124 00:57:54.674621 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:55.814880 containerd[1593]: time="2026-01-24T00:57:55.814752545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:55.816062 containerd[1593]: time="2026-01-24T00:57:55.815999559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 24 00:57:55.817713 containerd[1593]: time="2026-01-24T00:57:55.817633924Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:55.820682 containerd[1593]: time="2026-01-24T00:57:55.820486781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:55.821193 containerd[1593]: time="2026-01-24T00:57:55.821128833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.323079156s" Jan 24 00:57:55.821193 containerd[1593]: time="2026-01-24T00:57:55.821191901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:57:55.830774 containerd[1593]: time="2026-01-24T00:57:55.830712731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:57:55.858135 containerd[1593]: time="2026-01-24T00:57:55.858030449Z" level=info msg="CreateContainer within sandbox \"fe3c9571f0c819a8788671964aaa6fd22353ae728c98340b059757da10308516\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:57:55.883081 containerd[1593]: time="2026-01-24T00:57:55.882962911Z" level=info msg="CreateContainer within sandbox \"fe3c9571f0c819a8788671964aaa6fd22353ae728c98340b059757da10308516\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3ab189c6039d8bd82dbc941c53b3c64fe1ea29a1cca755fb78c68a9a805c9b6\"" Jan 24 00:57:55.887852 containerd[1593]: time="2026-01-24T00:57:55.887770449Z" level=info msg="StartContainer for \"d3ab189c6039d8bd82dbc941c53b3c64fe1ea29a1cca755fb78c68a9a805c9b6\"" Jan 24 00:57:56.002995 containerd[1593]: time="2026-01-24T00:57:56.002883626Z" level=info msg="StartContainer for \"d3ab189c6039d8bd82dbc941c53b3c64fe1ea29a1cca755fb78c68a9a805c9b6\" returns successfully" Jan 24 00:57:56.403331 containerd[1593]: time="2026-01-24T00:57:56.403081676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:56.404435 containerd[1593]: time="2026-01-24T00:57:56.404363812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:57:56.406424 containerd[1593]: time="2026-01-24T00:57:56.406176690Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:56.409753 containerd[1593]: 
time="2026-01-24T00:57:56.409621828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:56.410439 containerd[1593]: time="2026-01-24T00:57:56.410196046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 579.412429ms" Jan 24 00:57:56.410439 containerd[1593]: time="2026-01-24T00:57:56.410360947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:57:56.421567 containerd[1593]: time="2026-01-24T00:57:56.421173889Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:57:56.441918 containerd[1593]: time="2026-01-24T00:57:56.441829548Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb\"" Jan 24 00:57:56.442643 containerd[1593]: time="2026-01-24T00:57:56.442613841Z" level=info msg="StartContainer for \"c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb\"" Jan 24 00:57:56.533976 kubelet[2664]: E0124 00:57:56.533636 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:57:56.539818 containerd[1593]: time="2026-01-24T00:57:56.539322119Z" level=info msg="StartContainer for \"c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb\" returns successfully" Jan 24 00:57:56.628806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb-rootfs.mount: Deactivated successfully. 
Jan 24 00:57:56.701650 containerd[1593]: time="2026-01-24T00:57:56.700035508Z" level=info msg="shim disconnected" id=c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb namespace=k8s.io Jan 24 00:57:56.701823 containerd[1593]: time="2026-01-24T00:57:56.701600966Z" level=warning msg="cleaning up after shim disconnected" id=c3da295c438b6a8e931e6f9c7f147d4d91f291eeca25576e5f78b99dfd9fd8cb namespace=k8s.io Jan 24 00:57:56.701823 containerd[1593]: time="2026-01-24T00:57:56.701669416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:57:56.707448 kubelet[2664]: E0124 00:57:56.707013 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:56.723515 kubelet[2664]: E0124 00:57:56.723446 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:57.718363 kubelet[2664]: I0124 00:57:57.717826 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:57:57.718363 kubelet[2664]: E0124 00:57:57.718234 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:57.718806 kubelet[2664]: E0124 00:57:57.718565 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:57.720676 containerd[1593]: time="2026-01-24T00:57:57.720609634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:57:57.737016 kubelet[2664]: I0124 00:57:57.736531 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f74fbf978-trlbk" podStartSLOduration=3.40328926 podStartE2EDuration="4.736509791s" podCreationTimestamp="2026-01-24 00:57:53 +0000 UTC" firstStartedPulling="2026-01-24 00:57:54.497003385 +0000 UTC m=+20.139149180" lastFinishedPulling="2026-01-24 00:57:55.830223915 +0000 UTC m=+21.472369711" observedRunningTime="2026-01-24 00:57:56.757071579 +0000 UTC m=+22.399217375" watchObservedRunningTime="2026-01-24 00:57:57.736509791 +0000 UTC m=+23.378655588" Jan 24 00:57:58.524027 kubelet[2664]: E0124 00:57:58.523631 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:00.524167 kubelet[2664]: E0124 00:58:00.523844 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:00.756012 containerd[1593]: time="2026-01-24T00:58:00.755907109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:00.757398 containerd[1593]: time="2026-01-24T00:58:00.757189516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, 
bytes read=70446859" Jan 24 00:58:00.758799 containerd[1593]: time="2026-01-24T00:58:00.758737651Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:00.762713 containerd[1593]: time="2026-01-24T00:58:00.762642624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:00.763672 containerd[1593]: time="2026-01-24T00:58:00.763586800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.042942942s" Jan 24 00:58:00.763672 containerd[1593]: time="2026-01-24T00:58:00.763658072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:58:00.766650 containerd[1593]: time="2026-01-24T00:58:00.766546394Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:58:00.789983 containerd[1593]: time="2026-01-24T00:58:00.789748185Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef\"" Jan 24 00:58:00.791233 containerd[1593]: time="2026-01-24T00:58:00.791156676Z" level=info msg="StartContainer for \"f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef\"" Jan 24 00:58:00.922837 containerd[1593]: time="2026-01-24T00:58:00.922780908Z" level=info msg="StartContainer for \"f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef\" returns successfully" Jan 24 00:58:01.739968 kubelet[2664]: E0124 00:58:01.739690 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:01.770482 containerd[1593]: time="2026-01-24T00:58:01.770236823Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:58:01.801922 kubelet[2664]: I0124 00:58:01.801694 2664 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:58:01.821557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef-rootfs.mount: Deactivated successfully. 
Jan 24 00:58:01.826364 containerd[1593]: time="2026-01-24T00:58:01.823841955Z" level=info msg="shim disconnected" id=f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef namespace=k8s.io Jan 24 00:58:01.826364 containerd[1593]: time="2026-01-24T00:58:01.823905271Z" level=warning msg="cleaning up after shim disconnected" id=f123a104d823b3b8e57767fc0d7b6c009906be9790212f883d00065355bd37ef namespace=k8s.io Jan 24 00:58:01.826364 containerd[1593]: time="2026-01-24T00:58:01.823920342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:58:01.919338 kubelet[2664]: I0124 00:58:01.919019 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3989f243-b175-4180-9b60-b4d8b86d76d7-config-volume\") pod \"coredns-668d6bf9bc-s9c2b\" (UID: \"3989f243-b175-4180-9b60-b4d8b86d76d7\") " pod="kube-system/coredns-668d6bf9bc-s9c2b" Jan 24 00:58:01.919338 kubelet[2664]: I0124 00:58:01.919088 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-ca-bundle\") pod \"whisker-9db55f7d4-l9kdm\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " pod="calico-system/whisker-9db55f7d4-l9kdm" Jan 24 00:58:01.919338 kubelet[2664]: I0124 00:58:01.919224 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22wd\" (UniqueName: \"kubernetes.io/projected/3989f243-b175-4180-9b60-b4d8b86d76d7-kube-api-access-n22wd\") pod \"coredns-668d6bf9bc-s9c2b\" (UID: \"3989f243-b175-4180-9b60-b4d8b86d76d7\") " pod="kube-system/coredns-668d6bf9bc-s9c2b" Jan 24 00:58:01.919921 kubelet[2664]: I0124 00:58:01.919827 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxhfb\" (UniqueName: \"kubernetes.io/projected/a40b7ad0-87c5-48cf-aae6-708b12427df9-kube-api-access-cxhfb\") pod \"coredns-668d6bf9bc-4d65b\" (UID: \"a40b7ad0-87c5-48cf-aae6-708b12427df9\") " pod="kube-system/coredns-668d6bf9bc-4d65b" Jan 24 00:58:01.919921 kubelet[2664]: I0124 00:58:01.919928 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-backend-key-pair\") pod \"whisker-9db55f7d4-l9kdm\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " pod="calico-system/whisker-9db55f7d4-l9kdm" Jan 24 00:58:01.919921 kubelet[2664]: I0124 00:58:01.919957 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f4fb867-3038-43f2-9206-1156c12b1931-calico-apiserver-certs\") pod \"calico-apiserver-89796fd66-7lkh6\" (UID: \"0f4fb867-3038-43f2-9206-1156c12b1931\") " pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" Jan 24 00:58:01.919921 kubelet[2664]: I0124 00:58:01.919981 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5dh4\" (UniqueName: \"kubernetes.io/projected/0f4fb867-3038-43f2-9206-1156c12b1931-kube-api-access-f5dh4\") pod \"calico-apiserver-89796fd66-7lkh6\" (UID: \"0f4fb867-3038-43f2-9206-1156c12b1931\") " pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" Jan 24 00:58:01.919921 kubelet[2664]: I0124 00:58:01.920007 2664 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfx7w\" (UniqueName: \"kubernetes.io/projected/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-kube-api-access-jfx7w\") pod \"whisker-9db55f7d4-l9kdm\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " pod="calico-system/whisker-9db55f7d4-l9kdm" Jan 24 00:58:01.920527 kubelet[2664]: I0124 00:58:01.920031 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40b7ad0-87c5-48cf-aae6-708b12427df9-config-volume\") pod \"coredns-668d6bf9bc-4d65b\" (UID: \"a40b7ad0-87c5-48cf-aae6-708b12427df9\") " pod="kube-system/coredns-668d6bf9bc-4d65b" Jan 24 00:58:02.022371 kubelet[2664]: I0124 00:58:02.021555 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw8pc\" (UniqueName: \"kubernetes.io/projected/0b98a207-ae40-4df7-81ed-b24949ca269a-kube-api-access-nw8pc\") pod \"calico-apiserver-89796fd66-rbmf5\" (UID: \"0b98a207-ae40-4df7-81ed-b24949ca269a\") " pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" Jan 24 00:58:02.022371 kubelet[2664]: I0124 00:58:02.021873 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eb625f4a-3376-469f-90ac-91f293666e81-goldmane-key-pair\") pod \"goldmane-666569f655-wrk6w\" (UID: \"eb625f4a-3376-469f-90ac-91f293666e81\") " pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.022371 kubelet[2664]: I0124 00:58:02.021921 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntzw\" (UniqueName: \"kubernetes.io/projected/3c0ff763-a567-4414-bf09-8f7990c6e756-kube-api-access-hntzw\") pod \"calico-kube-controllers-5bb8c95968-8n99c\" (UID: \"3c0ff763-a567-4414-bf09-8f7990c6e756\") " pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" Jan 24 00:58:02.022371 kubelet[2664]: I0124 00:58:02.021973 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb625f4a-3376-469f-90ac-91f293666e81-goldmane-ca-bundle\") pod \"goldmane-666569f655-wrk6w\" (UID: \"eb625f4a-3376-469f-90ac-91f293666e81\") " pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.022371 kubelet[2664]: I0124 00:58:02.022020 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c0ff763-a567-4414-bf09-8f7990c6e756-tigera-ca-bundle\") pod \"calico-kube-controllers-5bb8c95968-8n99c\" (UID: \"3c0ff763-a567-4414-bf09-8f7990c6e756\") " pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" Jan 24 00:58:02.022641 kubelet[2664]: I0124 00:58:02.022048 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb625f4a-3376-469f-90ac-91f293666e81-config\") pod \"goldmane-666569f655-wrk6w\" (UID: \"eb625f4a-3376-469f-90ac-91f293666e81\") " pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.022641 kubelet[2664]: I0124 00:58:02.022117 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b98a207-ae40-4df7-81ed-b24949ca269a-calico-apiserver-certs\") 
pod \"calico-apiserver-89796fd66-rbmf5\" (UID: \"0b98a207-ae40-4df7-81ed-b24949ca269a\") " pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" Jan 24 00:58:02.022641 kubelet[2664]: I0124 00:58:02.022141 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hs5w\" (UniqueName: \"kubernetes.io/projected/eb625f4a-3376-469f-90ac-91f293666e81-kube-api-access-6hs5w\") pod \"goldmane-666569f655-wrk6w\" (UID: \"eb625f4a-3376-469f-90ac-91f293666e81\") " pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.141406 kubelet[2664]: E0124 00:58:02.141378 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:02.143763 containerd[1593]: time="2026-01-24T00:58:02.143621672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4d65b,Uid:a40b7ad0-87c5-48cf-aae6-708b12427df9,Namespace:kube-system,Attempt:0,}" Jan 24 00:58:02.164487 containerd[1593]: time="2026-01-24T00:58:02.164388122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-7lkh6,Uid:0f4fb867-3038-43f2-9206-1156c12b1931,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:58:02.179016 kubelet[2664]: E0124 00:58:02.178579 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:02.179948 containerd[1593]: time="2026-01-24T00:58:02.179639927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9c2b,Uid:3989f243-b175-4180-9b60-b4d8b86d76d7,Namespace:kube-system,Attempt:0,}" Jan 24 00:58:02.183746 containerd[1593]: time="2026-01-24T00:58:02.183561114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8c95968-8n99c,Uid:3c0ff763-a567-4414-bf09-8f7990c6e756,Namespace:calico-system,Attempt:0,}" Jan 24 00:58:02.187521 containerd[1593]: time="2026-01-24T00:58:02.187053251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrk6w,Uid:eb625f4a-3376-469f-90ac-91f293666e81,Namespace:calico-system,Attempt:0,}" Jan 24 00:58:02.201856 containerd[1593]: time="2026-01-24T00:58:02.201722647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9db55f7d4-l9kdm,Uid:d8d3e7e8-8bbd-4faf-8616-34b107036aa7,Namespace:calico-system,Attempt:0,}" Jan 24 00:58:02.206549 containerd[1593]: time="2026-01-24T00:58:02.206480076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-rbmf5,Uid:0b98a207-ae40-4df7-81ed-b24949ca269a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:58:02.410476 containerd[1593]: time="2026-01-24T00:58:02.409947062Z" level=error msg="Failed to destroy network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.410959 containerd[1593]: time="2026-01-24T00:58:02.410704763Z" level=error msg="Failed to destroy network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 
00:58:02.419804 containerd[1593]: time="2026-01-24T00:58:02.419671397Z" level=error msg="Failed to destroy network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.420744 containerd[1593]: time="2026-01-24T00:58:02.420660518Z" level=error msg="Failed to destroy network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.424074 containerd[1593]: time="2026-01-24T00:58:02.422557969Z" level=error msg="encountered an error cleaning up failed sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.424074 containerd[1593]: time="2026-01-24T00:58:02.422666949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4d65b,Uid:a40b7ad0-87c5-48cf-aae6-708b12427df9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.428064 containerd[1593]: time="2026-01-24T00:58:02.427617002Z" level=error msg="encountered an error cleaning up failed sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.428064 containerd[1593]: time="2026-01-24T00:58:02.427743549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9c2b,Uid:3989f243-b175-4180-9b60-b4d8b86d76d7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.434849 containerd[1593]: time="2026-01-24T00:58:02.434767887Z" level=error msg="encountered an error cleaning up failed sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.434849 containerd[1593]: time="2026-01-24T00:58:02.434803863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrk6w,Uid:eb625f4a-3376-469f-90ac-91f293666e81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.443431 containerd[1593]: time="2026-01-24T00:58:02.441082875Z" level=error msg="Failed to destroy network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.443746 containerd[1593]: time="2026-01-24T00:58:02.443636763Z" level=error msg="encountered an error cleaning up failed sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.443746 containerd[1593]: time="2026-01-24T00:58:02.443714947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8c95968-8n99c,Uid:3c0ff763-a567-4414-bf09-8f7990c6e756,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.453893 containerd[1593]: time="2026-01-24T00:58:02.451960957Z" level=error msg="encountered an error cleaning up failed sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.453893 containerd[1593]: time="2026-01-24T00:58:02.452064887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-7lkh6,Uid:0f4fb867-3038-43f2-9206-1156c12b1931,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.468839 kubelet[2664]: E0124 00:58:02.468789 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.468954 containerd[1593]: time="2026-01-24T00:58:02.468840715Z" level=error msg="Failed to destroy network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.469462 containerd[1593]: time="2026-01-24T00:58:02.469430743Z" 
level=error msg="encountered an error cleaning up failed sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.469632 containerd[1593]: time="2026-01-24T00:58:02.469609761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9db55f7d4-l9kdm,Uid:d8d3e7e8-8bbd-4faf-8616-34b107036aa7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.469783 kubelet[2664]: E0124 00:58:02.469668 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" Jan 24 00:58:02.469783 kubelet[2664]: E0124 00:58:02.469763 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" Jan 24 00:58:02.469866 kubelet[2664]: E0124 00:58:02.469818 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89796fd66-7lkh6_calico-apiserver(0f4fb867-3038-43f2-9206-1156c12b1931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89796fd66-7lkh6_calico-apiserver(0f4fb867-3038-43f2-9206-1156c12b1931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:02.469866 kubelet[2664]: E0124 00:58:02.469458 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.469866 kubelet[2664]: E0124 00:58:02.469858 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" Jan 24 00:58:02.470025 kubelet[2664]: E0124 00:58:02.469876 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" Jan 24 00:58:02.470025 kubelet[2664]: E0124 00:58:02.469913 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bb8c95968-8n99c_calico-system(3c0ff763-a567-4414-bf09-8f7990c6e756)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bb8c95968-8n99c_calico-system(3c0ff763-a567-4414-bf09-8f7990c6e756)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:02.470025 kubelet[2664]: E0124 00:58:02.469208 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.470165 kubelet[2664]: E0124 00:58:02.469952 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s9c2b" Jan 24 00:58:02.470165 kubelet[2664]: E0124 00:58:02.469970 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s9c2b" Jan 24 00:58:02.470165 kubelet[2664]: E0124 00:58:02.470005 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s9c2b_kube-system(3989f243-b175-4180-9b60-b4d8b86d76d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s9c2b_kube-system(3989f243-b175-4180-9b60-b4d8b86d76d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s9c2b" podUID="3989f243-b175-4180-9b60-b4d8b86d76d7" Jan 24 00:58:02.470539 kubelet[2664]: E0124 00:58:02.469228 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.470539 kubelet[2664]: E0124 00:58:02.470037 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4d65b" Jan 24 00:58:02.470539 kubelet[2664]: E0124 00:58:02.470054 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4d65b" Jan 24 00:58:02.470610 kubelet[2664]: E0124 00:58:02.470082 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4d65b_kube-system(a40b7ad0-87c5-48cf-aae6-708b12427df9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4d65b_kube-system(a40b7ad0-87c5-48cf-aae6-708b12427df9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4d65b" podUID="a40b7ad0-87c5-48cf-aae6-708b12427df9" Jan 24 00:58:02.470610 kubelet[2664]: E0124 00:58:02.469439 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.470610 kubelet[2664]: E0124 00:58:02.470111 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.470764 kubelet[2664]: E0124 00:58:02.470127 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wrk6w" Jan 24 00:58:02.470764 kubelet[2664]: E0124 00:58:02.470154 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wrk6w_calico-system(eb625f4a-3376-469f-90ac-91f293666e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wrk6w_calico-system(eb625f4a-3376-469f-90ac-91f293666e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:02.471219 kubelet[2664]: E0124 00:58:02.471178 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.471394 kubelet[2664]: E0124 00:58:02.471214 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9db55f7d4-l9kdm" Jan 24 00:58:02.471502 kubelet[2664]: E0124 00:58:02.471443 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9db55f7d4-l9kdm" Jan 24 00:58:02.471753 kubelet[2664]: E0124 00:58:02.471524 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9db55f7d4-l9kdm_calico-system(d8d3e7e8-8bbd-4faf-8616-34b107036aa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9db55f7d4-l9kdm_calico-system(d8d3e7e8-8bbd-4faf-8616-34b107036aa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9db55f7d4-l9kdm" podUID="d8d3e7e8-8bbd-4faf-8616-34b107036aa7" Jan 24 00:58:02.515100 containerd[1593]: time="2026-01-24T00:58:02.514841834Z" level=error msg="Failed to destroy network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.515642 containerd[1593]: time="2026-01-24T00:58:02.515564922Z" level=error msg="encountered an error cleaning up failed sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.515693 containerd[1593]: time="2026-01-24T00:58:02.515668562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-rbmf5,Uid:0b98a207-ae40-4df7-81ed-b24949ca269a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.516213 kubelet[2664]: E0124 00:58:02.516042 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.516213 kubelet[2664]: E0124 00:58:02.516125 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" Jan 24 00:58:02.516213 kubelet[2664]: E0124 00:58:02.516145 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" Jan 24 00:58:02.516517 kubelet[2664]: E0124 00:58:02.516232 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89796fd66-rbmf5_calico-apiserver(0b98a207-ae40-4df7-81ed-b24949ca269a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89796fd66-rbmf5_calico-apiserver(0b98a207-ae40-4df7-81ed-b24949ca269a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:02.530752 containerd[1593]: time="2026-01-24T00:58:02.530392545Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gdj7,Uid:b7b68612-f671-4faf-9c72-eb6b0593666c,Namespace:calico-system,Attempt:0,}" Jan 24 00:58:02.754236 kubelet[2664]: E0124 00:58:02.752171 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:02.756493 containerd[1593]: time="2026-01-24T00:58:02.755972495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:58:02.760029 kubelet[2664]: I0124 00:58:02.759846 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:02.768121 kubelet[2664]: I0124 00:58:02.765029 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:02.768121 kubelet[2664]: I0124 00:58:02.766427 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:02.793091 kubelet[2664]: I0124 00:58:02.793065 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:02.818034 containerd[1593]: time="2026-01-24T00:58:02.817925166Z" level=info msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" Jan 24 00:58:02.818524 containerd[1593]: time="2026-01-24T00:58:02.818152587Z" level=info msg="Ensure that sandbox 7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b in task-service has been cleanup successfully" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.818636168Z" level=info msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.818798902Z" level=info msg="Ensure that sandbox 0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187 in task-service has been cleanup successfully" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.818959131Z" level=info msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.819197115Z" level=info msg="Ensure that sandbox 128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1 in task-service has been cleanup successfully" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.819470216Z" level=info msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" Jan 24 00:58:02.819893 containerd[1593]: time="2026-01-24T00:58:02.819607857Z" level=info msg="Ensure that sandbox e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b in task-service has been cleanup successfully" Jan 24 00:58:02.828544 kubelet[2664]: I0124 00:58:02.828425 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:02.830771 containerd[1593]: time="2026-01-24T00:58:02.830202510Z" level=info msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" Jan 24 00:58:02.830771 containerd[1593]: time="2026-01-24T00:58:02.830586902Z" level=info msg="Ensure that sandbox 
3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a in task-service has been cleanup successfully" Jan 24 00:58:02.837683 kubelet[2664]: I0124 00:58:02.837612 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:02.840772 containerd[1593]: time="2026-01-24T00:58:02.840650530Z" level=info msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" Jan 24 00:58:02.845533 kubelet[2664]: I0124 00:58:02.845370 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:02.846215 containerd[1593]: time="2026-01-24T00:58:02.845824727Z" level=info msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" Jan 24 00:58:02.846215 containerd[1593]: time="2026-01-24T00:58:02.845943167Z" level=info msg="Ensure that sandbox 0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5 in task-service has been cleanup successfully" Jan 24 00:58:02.848788 containerd[1593]: time="2026-01-24T00:58:02.848717731Z" level=info msg="Ensure that sandbox b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb in task-service has been cleanup successfully" Jan 24 00:58:02.868465 containerd[1593]: time="2026-01-24T00:58:02.868013315Z" level=error msg="Failed to destroy network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.943140 containerd[1593]: time="2026-01-24T00:58:02.938044549Z" level=error msg="encountered an error cleaning up failed sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.943140 containerd[1593]: time="2026-01-24T00:58:02.938103283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gdj7,Uid:b7b68612-f671-4faf-9c72-eb6b0593666c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.938446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a-shm.mount: Deactivated successfully. 
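Every failed RunPodSandbox and StopPodSandbox call in the entries above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container populates once it is running, and that container's image is only being pulled at this point (see the PullImage entry for ghcr.io/flatcar/calico/node:v3.30.4 above). The Go sketch below mirrors the behaviour implied by the log message rather than Calico's actual source; nodenameFile is simply the path quoted in the errors.

// nodename_check.go: a minimal sketch of the lookup the failing sandbox
// operations trip over. Until calico/node starts and writes its node name to
// /var/lib/calico/nodename, this read fails and every CNI add/delete returns
// the "no such file or directory" error repeated throughout the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Echoes the hint embedded in the log message itself.
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}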
Jan 24 00:58:02.945894 kubelet[2664]: E0124 00:58:02.945387 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.945894 kubelet[2664]: E0124 00:58:02.945482 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:58:02.945894 kubelet[2664]: E0124 00:58:02.945503 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gdj7" Jan 24 00:58:02.946379 kubelet[2664]: E0124 00:58:02.945535 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:02.958822 containerd[1593]: time="2026-01-24T00:58:02.958782410Z" level=error msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" failed" error="failed to destroy network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.959577 containerd[1593]: time="2026-01-24T00:58:02.959118180Z" level=error msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" failed" error="failed to destroy network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.960058 kubelet[2664]: E0124 00:58:02.960036 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:02.960489 kubelet[2664]: E0124 00:58:02.960390 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb"} Jan 24 00:58:02.960610 kubelet[2664]: E0124 00:58:02.960564 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3989f243-b175-4180-9b60-b4d8b86d76d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:02.960610 kubelet[2664]: E0124 00:58:02.960587 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3989f243-b175-4180-9b60-b4d8b86d76d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s9c2b" podUID="3989f243-b175-4180-9b60-b4d8b86d76d7" Jan 24 00:58:02.960937 kubelet[2664]: E0124 00:58:02.960151 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:02.960937 kubelet[2664]: E0124 00:58:02.960856 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1"} Jan 24 00:58:02.960937 kubelet[2664]: E0124 00:58:02.960875 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b98a207-ae40-4df7-81ed-b24949ca269a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:02.960937 kubelet[2664]: E0124 00:58:02.960907 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b98a207-ae40-4df7-81ed-b24949ca269a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" 
podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:02.991545 containerd[1593]: time="2026-01-24T00:58:02.991207671Z" level=error msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" failed" error="failed to destroy network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.991779 kubelet[2664]: E0124 00:58:02.991741 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:02.991968 kubelet[2664]: E0124 00:58:02.991873 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187"} Jan 24 00:58:02.991968 kubelet[2664]: E0124 00:58:02.991911 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c0ff763-a567-4414-bf09-8f7990c6e756\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:02.991968 kubelet[2664]: E0124 00:58:02.991938 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c0ff763-a567-4414-bf09-8f7990c6e756\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:02.992743 containerd[1593]: time="2026-01-24T00:58:02.992627472Z" level=error msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" failed" error="failed to destroy network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.993044 kubelet[2664]: E0124 00:58:02.992923 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:02.993096 
kubelet[2664]: E0124 00:58:02.993051 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a"} Jan 24 00:58:02.993133 kubelet[2664]: E0124 00:58:02.993104 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb625f4a-3376-469f-90ac-91f293666e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:02.993384 kubelet[2664]: E0124 00:58:02.993136 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb625f4a-3376-469f-90ac-91f293666e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:02.997813 containerd[1593]: time="2026-01-24T00:58:02.997705280Z" level=error msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" failed" error="failed to destroy network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:02.998150 kubelet[2664]: E0124 00:58:02.998007 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:02.998150 kubelet[2664]: E0124 00:58:02.998049 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b"} Jan 24 00:58:02.998150 kubelet[2664]: E0124 00:58:02.998086 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f4fb867-3038-43f2-9206-1156c12b1931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:02.998150 kubelet[2664]: E0124 00:58:02.998114 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f4fb867-3038-43f2-9206-1156c12b1931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:03.006674 containerd[1593]: time="2026-01-24T00:58:03.006188897Z" level=error msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" failed" error="failed to destroy network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:03.006790 kubelet[2664]: E0124 00:58:03.006581 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:03.009761 kubelet[2664]: E0124 00:58:03.006626 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b"} Jan 24 00:58:03.009761 kubelet[2664]: E0124 00:58:03.009752 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:03.009985 kubelet[2664]: E0124 00:58:03.009780 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9db55f7d4-l9kdm" podUID="d8d3e7e8-8bbd-4faf-8616-34b107036aa7" Jan 24 00:58:03.031224 containerd[1593]: time="2026-01-24T00:58:03.031016181Z" level=error msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" failed" error="failed to destroy network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:03.031730 kubelet[2664]: E0124 00:58:03.031519 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:03.031730 kubelet[2664]: E0124 00:58:03.031630 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5"} Jan 24 00:58:03.031730 kubelet[2664]: E0124 00:58:03.031678 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a40b7ad0-87c5-48cf-aae6-708b12427df9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:03.031730 kubelet[2664]: E0124 00:58:03.031708 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a40b7ad0-87c5-48cf-aae6-708b12427df9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4d65b" podUID="a40b7ad0-87c5-48cf-aae6-708b12427df9" Jan 24 00:58:03.849821 kubelet[2664]: I0124 00:58:03.849432 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:03.850725 containerd[1593]: time="2026-01-24T00:58:03.850454113Z" level=info msg="StopPodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" Jan 24 00:58:03.851164 containerd[1593]: time="2026-01-24T00:58:03.850726566Z" level=info msg="Ensure that sandbox 400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a in task-service has been cleanup successfully" Jan 24 00:58:03.914337 containerd[1593]: time="2026-01-24T00:58:03.914035394Z" level=error msg="StopPodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" failed" error="failed to destroy network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:58:03.917759 kubelet[2664]: E0124 00:58:03.917449 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:03.917951 kubelet[2664]: E0124 00:58:03.917756 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a"} Jan 24 00:58:03.917951 kubelet[2664]: E0124 00:58:03.917804 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7b68612-f671-4faf-9c72-eb6b0593666c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:58:03.917951 kubelet[2664]: E0124 00:58:03.917855 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7b68612-f671-4faf-9c72-eb6b0593666c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:08.673138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440364537.mount: Deactivated successfully. Jan 24 00:58:08.932482 containerd[1593]: time="2026-01-24T00:58:08.932039937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:08.934031 containerd[1593]: time="2026-01-24T00:58:08.933847286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:58:08.935734 containerd[1593]: time="2026-01-24T00:58:08.935651606Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:08.938521 containerd[1593]: time="2026-01-24T00:58:08.938440124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:58:08.939056 containerd[1593]: time="2026-01-24T00:58:08.938953750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.178256588s" Jan 24 00:58:08.939056 containerd[1593]: time="2026-01-24T00:58:08.939047112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:58:08.954031 containerd[1593]: time="2026-01-24T00:58:08.953942233Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:58:08.990518 containerd[1593]: time="2026-01-24T00:58:08.990438015Z" level=info msg="CreateContainer within sandbox \"2d0ee328868e6aee346dfbe03cb599e9e1787088cbc0cde971c3f8d8ce471275\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c8bbe24183d17b548bcb6408b9239174c0a10cc1b90eb1561dd9addf9fd20c39\"" Jan 24 00:58:08.992622 containerd[1593]: time="2026-01-24T00:58:08.992451983Z" level=info msg="StartContainer for \"c8bbe24183d17b548bcb6408b9239174c0a10cc1b90eb1561dd9addf9fd20c39\"" Jan 24 00:58:09.136353 containerd[1593]: time="2026-01-24T00:58:09.136029409Z" level=info msg="StartContainer for \"c8bbe24183d17b548bcb6408b9239174c0a10cc1b90eb1561dd9addf9fd20c39\" returns successfully" Jan 24 00:58:09.324049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:58:09.324190 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:58:09.461646 containerd[1593]: time="2026-01-24T00:58:09.459377323Z" level=info msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.623 [INFO][3896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.623 [INFO][3896] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" iface="eth0" netns="/var/run/netns/cni-942c1586-d275-d342-be8b-059329738816" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.624 [INFO][3896] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" iface="eth0" netns="/var/run/netns/cni-942c1586-d275-d342-be8b-059329738816" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.628 [INFO][3896] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" iface="eth0" netns="/var/run/netns/cni-942c1586-d275-d342-be8b-059329738816" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.628 [INFO][3896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.628 [INFO][3896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.781 [INFO][3912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.782 [INFO][3912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.782 [INFO][3912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.795 [WARNING][3912] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.795 [INFO][3912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.798 [INFO][3912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:09.808590 containerd[1593]: 2026-01-24 00:58:09.802 [INFO][3896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:09.813128 containerd[1593]: time="2026-01-24T00:58:09.810198230Z" level=info msg="TearDown network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" successfully" Jan 24 00:58:09.813128 containerd[1593]: time="2026-01-24T00:58:09.810224384Z" level=info msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" returns successfully" Jan 24 00:58:09.815424 systemd[1]: run-netns-cni\x2d942c1586\x2dd275\x2dd342\x2dbe8b\x2d059329738816.mount: Deactivated successfully. Jan 24 00:58:09.903773 kubelet[2664]: E0124 00:58:09.903438 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:09.908194 kubelet[2664]: I0124 00:58:09.906895 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfx7w\" (UniqueName: \"kubernetes.io/projected/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-kube-api-access-jfx7w\") pod \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " Jan 24 00:58:09.908194 kubelet[2664]: I0124 00:58:09.907101 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-backend-key-pair\") pod \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " Jan 24 00:58:09.908194 kubelet[2664]: I0124 00:58:09.907137 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-ca-bundle\") pod \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\" (UID: \"d8d3e7e8-8bbd-4faf-8616-34b107036aa7\") " Jan 24 00:58:09.908194 kubelet[2664]: I0124 00:58:09.908033 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d8d3e7e8-8bbd-4faf-8616-34b107036aa7" (UID: "d8d3e7e8-8bbd-4faf-8616-34b107036aa7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:58:09.926541 kubelet[2664]: I0124 00:58:09.926409 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d8d3e7e8-8bbd-4faf-8616-34b107036aa7" (UID: "d8d3e7e8-8bbd-4faf-8616-34b107036aa7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:58:09.927801 systemd[1]: var-lib-kubelet-pods-d8d3e7e8\x2d8bbd\x2d4faf\x2d8616\x2d34b107036aa7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:58:09.933442 systemd[1]: var-lib-kubelet-pods-d8d3e7e8\x2d8bbd\x2d4faf\x2d8616\x2d34b107036aa7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfx7w.mount: Deactivated successfully. Jan 24 00:58:09.936111 kubelet[2664]: I0124 00:58:09.935783 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-kube-api-access-jfx7w" (OuterVolumeSpecName: "kube-api-access-jfx7w") pod "d8d3e7e8-8bbd-4faf-8616-34b107036aa7" (UID: "d8d3e7e8-8bbd-4faf-8616-34b107036aa7"). InnerVolumeSpecName "kube-api-access-jfx7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:58:09.937727 kubelet[2664]: I0124 00:58:09.937676 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6qxq4" podStartSLOduration=1.614103419 podStartE2EDuration="15.937659692s" podCreationTimestamp="2026-01-24 00:57:54 +0000 UTC" firstStartedPulling="2026-01-24 00:57:54.617036414 +0000 UTC m=+20.259182210" lastFinishedPulling="2026-01-24 00:58:08.940592687 +0000 UTC m=+34.582738483" observedRunningTime="2026-01-24 00:58:09.937048023 +0000 UTC m=+35.579193839" watchObservedRunningTime="2026-01-24 00:58:09.937659692 +0000 UTC m=+35.579805519" Jan 24 00:58:10.008650 kubelet[2664]: I0124 00:58:10.008616 2664 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:58:10.009165 kubelet[2664]: I0124 00:58:10.008796 2664 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:58:10.009165 kubelet[2664]: I0124 00:58:10.008814 2664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfx7w\" (UniqueName: \"kubernetes.io/projected/d8d3e7e8-8bbd-4faf-8616-34b107036aa7-kube-api-access-jfx7w\") on node \"localhost\" DevicePath \"\"" Jan 24 00:58:10.310769 kubelet[2664]: I0124 00:58:10.310609 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwfc9\" (UniqueName: \"kubernetes.io/projected/c47f04db-70a5-460c-8f3b-ca0ed30f8b3a-kube-api-access-rwfc9\") pod \"whisker-689f665f8b-hwn7z\" (UID: \"c47f04db-70a5-460c-8f3b-ca0ed30f8b3a\") " pod="calico-system/whisker-689f665f8b-hwn7z" Jan 24 00:58:10.310769 kubelet[2664]: I0124 00:58:10.310694 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47f04db-70a5-460c-8f3b-ca0ed30f8b3a-whisker-ca-bundle\") pod 
\"whisker-689f665f8b-hwn7z\" (UID: \"c47f04db-70a5-460c-8f3b-ca0ed30f8b3a\") " pod="calico-system/whisker-689f665f8b-hwn7z" Jan 24 00:58:10.310769 kubelet[2664]: I0124 00:58:10.310717 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c47f04db-70a5-460c-8f3b-ca0ed30f8b3a-whisker-backend-key-pair\") pod \"whisker-689f665f8b-hwn7z\" (UID: \"c47f04db-70a5-460c-8f3b-ca0ed30f8b3a\") " pod="calico-system/whisker-689f665f8b-hwn7z" Jan 24 00:58:10.528183 kubelet[2664]: I0124 00:58:10.528040 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d3e7e8-8bbd-4faf-8616-34b107036aa7" path="/var/lib/kubelet/pods/d8d3e7e8-8bbd-4faf-8616-34b107036aa7/volumes" Jan 24 00:58:10.571747 containerd[1593]: time="2026-01-24T00:58:10.571590965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-689f665f8b-hwn7z,Uid:c47f04db-70a5-460c-8f3b-ca0ed30f8b3a,Namespace:calico-system,Attempt:0,}" Jan 24 00:58:10.792232 systemd-networkd[1258]: cali6c7eafd650e: Link UP Jan 24 00:58:10.794743 systemd-networkd[1258]: cali6c7eafd650e: Gained carrier Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.647 [INFO][3938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.668 [INFO][3938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--689f665f8b--hwn7z-eth0 whisker-689f665f8b- calico-system c47f04db-70a5-460c-8f3b-ca0ed30f8b3a 895 0 2026-01-24 00:58:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:689f665f8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-689f665f8b-hwn7z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6c7eafd650e [] [] }} ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.669 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.714 [INFO][3952] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" HandleID="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Workload="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.715 [INFO][3952] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" HandleID="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Workload="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-689f665f8b-hwn7z", "timestamp":"2026-01-24 00:58:10.714069637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.715 [INFO][3952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.715 [INFO][3952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.715 [INFO][3952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.724 [INFO][3952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.736 [INFO][3952] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.744 [INFO][3952] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.747 [INFO][3952] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.751 [INFO][3952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.751 [INFO][3952] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.755 [INFO][3952] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.759 [INFO][3952] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.765 [INFO][3952] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.766 [INFO][3952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" host="localhost" Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.766 [INFO][3952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:10.809010 containerd[1593]: 2026-01-24 00:58:10.766 [INFO][3952] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" HandleID="k8s-pod-network.b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Workload="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.770 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--689f665f8b--hwn7z-eth0", GenerateName:"whisker-689f665f8b-", Namespace:"calico-system", SelfLink:"", UID:"c47f04db-70a5-460c-8f3b-ca0ed30f8b3a", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"689f665f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-689f665f8b-hwn7z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c7eafd650e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.770 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.770 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c7eafd650e ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.791 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.792 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--689f665f8b--hwn7z-eth0", GenerateName:"whisker-689f665f8b-", Namespace:"calico-system", SelfLink:"", UID:"c47f04db-70a5-460c-8f3b-ca0ed30f8b3a", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"689f665f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d", Pod:"whisker-689f665f8b-hwn7z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c7eafd650e", MAC:"22:3c:2b:a7:24:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:10.810070 containerd[1593]: 2026-01-24 00:58:10.804 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d" Namespace="calico-system" Pod="whisker-689f665f8b-hwn7z" WorkloadEndpoint="localhost-k8s-whisker--689f665f8b--hwn7z-eth0" Jan 24 00:58:10.868598 containerd[1593]: time="2026-01-24T00:58:10.867792658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:10.868598 containerd[1593]: time="2026-01-24T00:58:10.867947437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:10.868598 containerd[1593]: time="2026-01-24T00:58:10.867963890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:10.868598 containerd[1593]: time="2026-01-24T00:58:10.868072152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:10.909760 kubelet[2664]: I0124 00:58:10.909720 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:58:10.911677 kubelet[2664]: E0124 00:58:10.911596 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:10.924073 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:11.055490 containerd[1593]: time="2026-01-24T00:58:11.054711936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-689f665f8b-hwn7z,Uid:c47f04db-70a5-460c-8f3b-ca0ed30f8b3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0308b0e52f1104fbe93b2e932097af95f52f707f8d81bed8e9b4a663864145d\"" Jan 24 00:58:11.066794 containerd[1593]: time="2026-01-24T00:58:11.065440967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:11.171908 containerd[1593]: time="2026-01-24T00:58:11.171763249Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:11.205795 containerd[1593]: time="2026-01-24T00:58:11.180758843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:11.205795 containerd[1593]: time="2026-01-24T00:58:11.181123347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:58:11.207376 kubelet[2664]: E0124 00:58:11.207115 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:11.207376 kubelet[2664]: E0124 00:58:11.207329 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:11.210501 kubelet[2664]: E0124 00:58:11.210048 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f89e7f4884e54659bfd281d3a9e6919f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:11.216622 containerd[1593]: time="2026-01-24T00:58:11.216485887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:11.288553 containerd[1593]: time="2026-01-24T00:58:11.288440183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:11.290509 containerd[1593]: time="2026-01-24T00:58:11.290388688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:11.290509 containerd[1593]: time="2026-01-24T00:58:11.290480026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:11.290778 kubelet[2664]: E0124 00:58:11.290708 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:11.290841 kubelet[2664]: E0124 00:58:11.290795 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:11.291046 kubelet[2664]: E0124 00:58:11.290947 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:11.292370 kubelet[2664]: E0124 00:58:11.292142 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:11.918056 kubelet[2664]: E0124 00:58:11.917962 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:12.134673 systemd-networkd[1258]: cali6c7eafd650e: Gained IPv6LL Jan 24 00:58:12.918404 kubelet[2664]: E0124 00:58:12.918211 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:14.531779 containerd[1593]: time="2026-01-24T00:58:14.531453389Z" level=info msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" Jan 24 00:58:14.532902 containerd[1593]: time="2026-01-24T00:58:14.532361585Z" level=info msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.657 [INFO][4202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.657 [INFO][4202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" iface="eth0" netns="/var/run/netns/cni-a2765b6a-b101-35ec-9dda-3d713c111dd3" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.657 [INFO][4202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" iface="eth0" netns="/var/run/netns/cni-a2765b6a-b101-35ec-9dda-3d713c111dd3" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.659 [INFO][4202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" iface="eth0" netns="/var/run/netns/cni-a2765b6a-b101-35ec-9dda-3d713c111dd3" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.659 [INFO][4202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.659 [INFO][4202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.803 [INFO][4226] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.805 [INFO][4226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.806 [INFO][4226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.831 [WARNING][4226] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.831 [INFO][4226] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.837 [INFO][4226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:14.850426 containerd[1593]: 2026-01-24 00:58:14.844 [INFO][4202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:14.856614 containerd[1593]: time="2026-01-24T00:58:14.856532463Z" level=info msg="TearDown network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" successfully" Jan 24 00:58:14.856676 containerd[1593]: time="2026-01-24T00:58:14.856616984Z" level=info msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" returns successfully" Jan 24 00:58:14.858599 systemd[1]: run-netns-cni\x2da2765b6a\x2db101\x2d35ec\x2d9dda\x2d3d713c111dd3.mount: Deactivated successfully. Jan 24 00:58:14.859856 containerd[1593]: time="2026-01-24T00:58:14.859557926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-7lkh6,Uid:0f4fb867-3038-43f2-9206-1156c12b1931,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.691 [INFO][4203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.694 [INFO][4203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" iface="eth0" netns="/var/run/netns/cni-b4da4ec0-6c4c-8ca4-e7fb-8d60dc6715b8" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.702 [INFO][4203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" iface="eth0" netns="/var/run/netns/cni-b4da4ec0-6c4c-8ca4-e7fb-8d60dc6715b8" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.708 [INFO][4203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" iface="eth0" netns="/var/run/netns/cni-b4da4ec0-6c4c-8ca4-e7fb-8d60dc6715b8" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.708 [INFO][4203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.709 [INFO][4203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.818 [INFO][4236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.819 [INFO][4236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.837 [INFO][4236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.847 [WARNING][4236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.847 [INFO][4236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.853 [INFO][4236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:14.868446 containerd[1593]: 2026-01-24 00:58:14.862 [INFO][4203] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:14.871659 containerd[1593]: time="2026-01-24T00:58:14.871584476Z" level=info msg="TearDown network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" successfully" Jan 24 00:58:14.871727 containerd[1593]: time="2026-01-24T00:58:14.871667114Z" level=info msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" returns successfully" Jan 24 00:58:14.872765 kubelet[2664]: E0124 00:58:14.872441 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:14.873165 containerd[1593]: time="2026-01-24T00:58:14.873083682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4d65b,Uid:a40b7ad0-87c5-48cf-aae6-708b12427df9,Namespace:kube-system,Attempt:1,}" Jan 24 00:58:14.877707 systemd[1]: run-netns-cni\x2db4da4ec0\x2d6c4c\x2d8ca4\x2de7fb\x2d8d60dc6715b8.mount: Deactivated successfully. Jan 24 00:58:15.115439 systemd-networkd[1258]: cali12f9114989b: Link UP Jan 24 00:58:15.119213 systemd-networkd[1258]: cali12f9114989b: Gained carrier Jan 24 00:58:15.125858 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:34336.service - OpenSSH per-connection server daemon (10.0.0.1:34336). Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:14.949 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:14.967 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0 calico-apiserver-89796fd66- calico-apiserver 0f4fb867-3038-43f2-9206-1156c12b1931 936 0 2026-01-24 00:57:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89796fd66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-89796fd66-7lkh6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12f9114989b [] [] }} ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:14.967 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.041 [INFO][4274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" HandleID="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.041 [INFO][4274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" 
HandleID="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-89796fd66-7lkh6", "timestamp":"2026-01-24 00:58:15.041642498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.041 [INFO][4274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.042 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.042 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.051 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.063 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.073 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.077 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.081 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.081 [INFO][4274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.084 [INFO][4274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7 Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.091 [INFO][4274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" host="localhost" Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:15.139788 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" HandleID="k8s-pod-network.8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.109 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4fb867-3038-43f2-9206-1156c12b1931", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-89796fd66-7lkh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12f9114989b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.110 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.110 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12f9114989b ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.121 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.121 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4fb867-3038-43f2-9206-1156c12b1931", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7", Pod:"calico-apiserver-89796fd66-7lkh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12f9114989b", MAC:"5e:d6:43:73:c7:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:15.140685 containerd[1593]: 2026-01-24 00:58:15.135 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-7lkh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:15.196762 containerd[1593]: time="2026-01-24T00:58:15.196440369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:15.196762 containerd[1593]: time="2026-01-24T00:58:15.196653581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:15.196762 containerd[1593]: time="2026-01-24T00:58:15.196701719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:15.197049 containerd[1593]: time="2026-01-24T00:58:15.196832905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:15.230571 systemd-networkd[1258]: cali8abcc704b75: Link UP Jan 24 00:58:15.231393 systemd-networkd[1258]: cali8abcc704b75: Gained carrier Jan 24 00:58:15.247111 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 34336 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:15.248542 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:15.263232 systemd-logind[1562]: New session 8 of user core. 
Jan 24 00:58:15.263425 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:15.271722 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:14.976 [INFO][4257] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:14.996 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4d65b-eth0 coredns-668d6bf9bc- kube-system a40b7ad0-87c5-48cf-aae6-708b12427df9 939 0 2026-01-24 00:57:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4d65b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8abcc704b75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:14.996 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.053 [INFO][4283] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" HandleID="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.054 [INFO][4283] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" HandleID="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311d90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4d65b", "timestamp":"2026-01-24 00:58:15.053647087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.054 [INFO][4283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.099 [INFO][4283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.158 [INFO][4283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.167 [INFO][4283] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.181 [INFO][4283] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.185 [INFO][4283] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.192 [INFO][4283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.192 [INFO][4283] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.195 [INFO][4283] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.202 [INFO][4283] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.213 [INFO][4283] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.213 [INFO][4283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" host="localhost" Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.213 [INFO][4283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:15.280166 containerd[1593]: 2026-01-24 00:58:15.213 [INFO][4283] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" HandleID="k8s-pod-network.6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.281414 containerd[1593]: 2026-01-24 00:58:15.219 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4d65b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a40b7ad0-87c5-48cf-aae6-708b12427df9", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4d65b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abcc704b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:15.281414 containerd[1593]: 2026-01-24 00:58:15.220 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.281414 containerd[1593]: 2026-01-24 00:58:15.220 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8abcc704b75 ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.281414 containerd[1593]: 2026-01-24 00:58:15.229 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.281414 
containerd[1593]: 2026-01-24 00:58:15.230 [INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4d65b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a40b7ad0-87c5-48cf-aae6-708b12427df9", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df", Pod:"coredns-668d6bf9bc-4d65b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abcc704b75", MAC:"a2:20:8d:28:64:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:15.281414 containerd[1593]: 2026-01-24 00:58:15.266 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df" Namespace="kube-system" Pod="coredns-668d6bf9bc-4d65b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:15.334823 containerd[1593]: time="2026-01-24T00:58:15.332640545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:15.334823 containerd[1593]: time="2026-01-24T00:58:15.332706809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:15.334823 containerd[1593]: time="2026-01-24T00:58:15.332718452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:15.334823 containerd[1593]: time="2026-01-24T00:58:15.332901384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:15.342157 containerd[1593]: time="2026-01-24T00:58:15.341943997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-7lkh6,Uid:0f4fb867-3038-43f2-9206-1156c12b1931,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7\"" Jan 24 00:58:15.348436 containerd[1593]: time="2026-01-24T00:58:15.348174589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:15.902439 containerd[1593]: time="2026-01-24T00:58:15.901488152Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:15.937917 containerd[1593]: time="2026-01-24T00:58:15.903219363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:15.937917 containerd[1593]: time="2026-01-24T00:58:15.903404869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:15.937978 kubelet[2664]: E0124 00:58:15.904382 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.937978 kubelet[2664]: E0124 00:58:15.904429 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.937978 kubelet[2664]: E0124 00:58:15.904537 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5dh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-7lkh6_calico-apiserver(0f4fb867-3038-43f2-9206-1156c12b1931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:15.937978 kubelet[2664]: E0124 00:58:15.907379 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:15.937978 kubelet[2664]: E0124 00:58:15.929194 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:16.047645 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:16.101225 sshd[4293]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:16.110578 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:58:16.113821 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:34336.service: Deactivated successfully. Jan 24 00:58:16.121800 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:58:16.124374 systemd-logind[1562]: Removed session 8. 
Jan 24 00:58:16.134541 containerd[1593]: time="2026-01-24T00:58:16.134500405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4d65b,Uid:a40b7ad0-87c5-48cf-aae6-708b12427df9,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df\"" Jan 24 00:58:16.138741 kubelet[2664]: E0124 00:58:16.138691 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:16.142035 containerd[1593]: time="2026-01-24T00:58:16.141728052Z" level=info msg="CreateContainer within sandbox \"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:58:16.190891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470154895.mount: Deactivated successfully. Jan 24 00:58:16.203838 containerd[1593]: time="2026-01-24T00:58:16.201714318Z" level=info msg="CreateContainer within sandbox \"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e10f0f1656d4c324004bd9692db68f7c034ce4309109c41cd8be92845fc315e\"" Jan 24 00:58:16.203838 containerd[1593]: time="2026-01-24T00:58:16.203041841Z" level=info msg="StartContainer for \"8e10f0f1656d4c324004bd9692db68f7c034ce4309109c41cd8be92845fc315e\"" Jan 24 00:58:16.295506 systemd-networkd[1258]: cali8abcc704b75: Gained IPv6LL Jan 24 00:58:16.306359 containerd[1593]: time="2026-01-24T00:58:16.306062992Z" level=info msg="StartContainer for \"8e10f0f1656d4c324004bd9692db68f7c034ce4309109c41cd8be92845fc315e\" returns successfully" Jan 24 00:58:16.422840 systemd-networkd[1258]: cali12f9114989b: Gained IPv6LL Jan 24 00:58:16.525114 containerd[1593]: time="2026-01-24T00:58:16.524617355Z" level=info msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" Jan 24 00:58:16.527136 containerd[1593]: time="2026-01-24T00:58:16.525801125Z" level=info msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.618 [INFO][4494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.618 [INFO][4494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" iface="eth0" netns="/var/run/netns/cni-219071c4-a29d-a347-f67b-1c93e376a567" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.620 [INFO][4494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" iface="eth0" netns="/var/run/netns/cni-219071c4-a29d-a347-f67b-1c93e376a567" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.622 [INFO][4494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" iface="eth0" netns="/var/run/netns/cni-219071c4-a29d-a347-f67b-1c93e376a567" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.622 [INFO][4494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.626 [INFO][4494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.687 [INFO][4510] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.688 [INFO][4510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.688 [INFO][4510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.704 [WARNING][4510] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.704 [INFO][4510] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.708 [INFO][4510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:16.721132 containerd[1593]: 2026-01-24 00:58:16.716 [INFO][4494] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:16.723377 containerd[1593]: time="2026-01-24T00:58:16.722503195Z" level=info msg="TearDown network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" successfully" Jan 24 00:58:16.723377 containerd[1593]: time="2026-01-24T00:58:16.722544860Z" level=info msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" returns successfully" Jan 24 00:58:16.723687 kubelet[2664]: E0124 00:58:16.723582 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:16.724817 containerd[1593]: time="2026-01-24T00:58:16.724133184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9c2b,Uid:3989f243-b175-4180-9b60-b4d8b86d76d7,Namespace:kube-system,Attempt:1,}" Jan 24 00:58:16.743075 kubelet[2664]: I0124 00:58:16.742987 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:58:16.743615 kubelet[2664]: E0124 00:58:16.743571 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.633 [INFO][4493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.633 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" iface="eth0" netns="/var/run/netns/cni-7b5570d8-8146-7b53-cda5-a5c67b3b6dc6" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.634 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" iface="eth0" netns="/var/run/netns/cni-7b5570d8-8146-7b53-cda5-a5c67b3b6dc6" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.636 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" iface="eth0" netns="/var/run/netns/cni-7b5570d8-8146-7b53-cda5-a5c67b3b6dc6" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.637 [INFO][4493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.637 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.734 [INFO][4516] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.735 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.735 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.756 [WARNING][4516] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.756 [INFO][4516] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.762 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:16.809703 containerd[1593]: 2026-01-24 00:58:16.771 [INFO][4493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:16.810107 containerd[1593]: time="2026-01-24T00:58:16.810074563Z" level=info msg="TearDown network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" successfully" Jan 24 00:58:16.810107 containerd[1593]: time="2026-01-24T00:58:16.810099824Z" level=info msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" returns successfully" Jan 24 00:58:16.814443 containerd[1593]: time="2026-01-24T00:58:16.813888094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-rbmf5,Uid:0b98a207-ae40-4df7-81ed-b24949ca269a,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:58:16.861775 systemd[1]: run-netns-cni\x2d7b5570d8\x2d8146\x2d7b53\x2dcda5\x2da5c67b3b6dc6.mount: Deactivated successfully. Jan 24 00:58:16.862012 systemd[1]: run-netns-cni\x2d219071c4\x2da29d\x2da347\x2df67b\x2d1c93e376a567.mount: Deactivated successfully. Jan 24 00:58:16.941208 kubelet[2664]: E0124 00:58:16.940915 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:16.943078 kubelet[2664]: E0124 00:58:16.942618 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:16.970186 systemd[1]: run-containerd-runc-k8s.io-c8bbe24183d17b548bcb6408b9239174c0a10cc1b90eb1561dd9addf9fd20c39-runc.8WACY4.mount: Deactivated successfully. 
Jan 24 00:58:16.973107 kubelet[2664]: I0124 00:58:16.972956 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4d65b" podStartSLOduration=37.972831622 podStartE2EDuration="37.972831622s" podCreationTimestamp="2026-01-24 00:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:58:16.965841038 +0000 UTC m=+42.607986834" watchObservedRunningTime="2026-01-24 00:58:16.972831622 +0000 UTC m=+42.614977418" Jan 24 00:58:17.113228 systemd-networkd[1258]: calia055eb304fb: Link UP Jan 24 00:58:17.124339 systemd-networkd[1258]: calia055eb304fb: Gained carrier Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.811 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.831 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0 coredns-668d6bf9bc- kube-system 3989f243-b175-4180-9b60-b4d8b86d76d7 994 0 2026-01-24 00:57:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s9c2b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia055eb304fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.832 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.951 [INFO][4541] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" HandleID="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.952 [INFO][4541] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" HandleID="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005846d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s9c2b", "timestamp":"2026-01-24 00:58:16.951669496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.953 [INFO][4541] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.954 [INFO][4541] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.955 [INFO][4541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.975 [INFO][4541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:16.997 [INFO][4541] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.010 [INFO][4541] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.021 [INFO][4541] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.028 [INFO][4541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.028 [INFO][4541] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.035 [INFO][4541] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106 Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.044 [INFO][4541] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.063 [INFO][4541] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.063 [INFO][4541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" host="localhost" Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.063 [INFO][4541] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
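The IPAM entries above show the CNI plugin confirming this host's affinity for the block 192.168.88.128/26 and then claiming the next free address from it (192.168.88.132 here, with .133 through .135 claimed for the other pods further down). The sketch below is only an illustration of that "first unassigned address in the block" step, not Calico's actual allocator; the block comes from the log, and the set of already-used addresses is an assumption chosen so the result matches the address assigned above.

```go
// nextfree.go: illustrative sketch of claiming the next free IPv4 address from
// an IPAM block, mirroring the 192.168.88.128/26 assignments in this log.
// This is not Calico's allocator; it only shows a linear scan over the block.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not marked as used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Addresses assumed already claimed by earlier endpoints on this host.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
		netip.MustParseAddr("192.168.88.131"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("next free address:", a) // 192.168.88.132, as claimed above
	}
}
```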
Jan 24 00:58:17.155786 containerd[1593]: 2026-01-24 00:58:17.063 [INFO][4541] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" HandleID="k8s-pod-network.92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.157212 containerd[1593]: 2026-01-24 00:58:17.089 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3989f243-b175-4180-9b60-b4d8b86d76d7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s9c2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia055eb304fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:17.157212 containerd[1593]: 2026-01-24 00:58:17.089 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.157212 containerd[1593]: 2026-01-24 00:58:17.089 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia055eb304fb ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.157212 containerd[1593]: 2026-01-24 00:58:17.128 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.157212 
containerd[1593]: 2026-01-24 00:58:17.131 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3989f243-b175-4180-9b60-b4d8b86d76d7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106", Pod:"coredns-668d6bf9bc-s9c2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia055eb304fb", MAC:"5a:8e:ef:ef:e6:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:17.157212 containerd[1593]: 2026-01-24 00:58:17.149 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9c2b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:17.230437 containerd[1593]: time="2026-01-24T00:58:17.229832482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:17.230437 containerd[1593]: time="2026-01-24T00:58:17.229897312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:17.230437 containerd[1593]: time="2026-01-24T00:58:17.229907723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:17.230437 containerd[1593]: time="2026-01-24T00:58:17.229999367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:17.370671 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:17.381909 systemd-networkd[1258]: cali6a6a76a2a47: Link UP Jan 24 00:58:17.382407 systemd-networkd[1258]: cali6a6a76a2a47: Gained carrier Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:16.914 [INFO][4538] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:16.959 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0 calico-apiserver-89796fd66- calico-apiserver 0b98a207-ae40-4df7-81ed-b24949ca269a 997 0 2026-01-24 00:57:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89796fd66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-89796fd66-rbmf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a6a76a2a47 [] [] }} ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:16.959 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.135 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" HandleID="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.137 [INFO][4578] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" HandleID="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b9b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-89796fd66-rbmf5", "timestamp":"2026-01-24 00:58:17.135985775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.137 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.138 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.138 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.163 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.195 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.224 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.233 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.242 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.242 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.255 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24 Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.275 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.291 [INFO][4578] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.292 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" host="localhost" Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.292 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:17.415208 containerd[1593]: 2026-01-24 00:58:17.292 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" HandleID="k8s-pod-network.558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.366 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b98a207-ae40-4df7-81ed-b24949ca269a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-89796fd66-rbmf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a6a76a2a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.378 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.378 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a6a76a2a47 ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.382 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.383 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b98a207-ae40-4df7-81ed-b24949ca269a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24", Pod:"calico-apiserver-89796fd66-rbmf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a6a76a2a47", MAC:"2e:7e:57:5d:f5:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:17.422400 containerd[1593]: 2026-01-24 00:58:17.406 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24" Namespace="calico-apiserver" Pod="calico-apiserver-89796fd66-rbmf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:17.493138 containerd[1593]: time="2026-01-24T00:58:17.492578204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9c2b,Uid:3989f243-b175-4180-9b60-b4d8b86d76d7,Namespace:kube-system,Attempt:1,} returns sandbox id \"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106\"" Jan 24 00:58:17.494915 kubelet[2664]: E0124 00:58:17.494746 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:17.509171 containerd[1593]: time="2026-01-24T00:58:17.509039581Z" level=info msg="CreateContainer within sandbox \"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:58:17.529709 containerd[1593]: time="2026-01-24T00:58:17.529475912Z" level=info msg="StopPodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" Jan 24 00:58:17.533814 containerd[1593]: time="2026-01-24T00:58:17.531718198Z" level=info msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" Jan 24 00:58:17.558628 containerd[1593]: time="2026-01-24T00:58:17.557948663Z" level=info msg="CreateContainer within sandbox \"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98e65e092ac3b5fe663f8a25655abc4e6768e984d0f244e9281a7ad450815cbc\"" Jan 24 00:58:17.576474 containerd[1593]: time="2026-01-24T00:58:17.575781083Z" level=info msg="StartContainer for \"98e65e092ac3b5fe663f8a25655abc4e6768e984d0f244e9281a7ad450815cbc\"" Jan 24 00:58:17.621997 containerd[1593]: time="2026-01-24T00:58:17.617966987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:17.621997 containerd[1593]: time="2026-01-24T00:58:17.620674845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:17.621997 containerd[1593]: time="2026-01-24T00:58:17.620744036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:17.626074 containerd[1593]: time="2026-01-24T00:58:17.623690105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:17.734069 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:17.841143 containerd[1593]: time="2026-01-24T00:58:17.840850090Z" level=info msg="StartContainer for \"98e65e092ac3b5fe663f8a25655abc4e6768e984d0f244e9281a7ad450815cbc\" returns successfully" Jan 24 00:58:17.911709 containerd[1593]: time="2026-01-24T00:58:17.910956319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89796fd66-rbmf5,Uid:0b98a207-ae40-4df7-81ed-b24949ca269a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24\"" Jan 24 00:58:17.915698 containerd[1593]: time="2026-01-24T00:58:17.915476017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.813 [INFO][4705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.814 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" iface="eth0" netns="/var/run/netns/cni-63e55730-7ffc-1ca6-7fe5-4f6013cf6060" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.815 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" iface="eth0" netns="/var/run/netns/cni-63e55730-7ffc-1ca6-7fe5-4f6013cf6060" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.816 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" iface="eth0" netns="/var/run/netns/cni-63e55730-7ffc-1ca6-7fe5-4f6013cf6060" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.816 [INFO][4705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.816 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.909 [INFO][4808] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.910 [INFO][4808] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.910 [INFO][4808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.922 [WARNING][4808] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.922 [INFO][4808] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.926 [INFO][4808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:17.937686 containerd[1593]: 2026-01-24 00:58:17.931 [INFO][4705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:17.940346 containerd[1593]: time="2026-01-24T00:58:17.939352976Z" level=info msg="TearDown network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" successfully" Jan 24 00:58:17.940346 containerd[1593]: time="2026-01-24T00:58:17.939377897Z" level=info msg="StopPodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" returns successfully" Jan 24 00:58:17.945488 systemd[1]: run-netns-cni\x2d63e55730\x2d7ffc\x2d1ca6\x2d7fe5\x2d4f6013cf6060.mount: Deactivated successfully. 
Jan 24 00:58:17.946957 containerd[1593]: time="2026-01-24T00:58:17.946186944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gdj7,Uid:b7b68612-f671-4faf-9c72-eb6b0593666c,Namespace:calico-system,Attempt:1,}" Jan 24 00:58:17.959838 kubelet[2664]: E0124 00:58:17.958461 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.805 [INFO][4721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.807 [INFO][4721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" iface="eth0" netns="/var/run/netns/cni-7862d72a-48e9-a77d-bf92-1143f8bbced1" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.813 [INFO][4721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" iface="eth0" netns="/var/run/netns/cni-7862d72a-48e9-a77d-bf92-1143f8bbced1" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.817 [INFO][4721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" iface="eth0" netns="/var/run/netns/cni-7862d72a-48e9-a77d-bf92-1143f8bbced1" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.817 [INFO][4721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.817 [INFO][4721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.910 [INFO][4810] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.910 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.927 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.936 [WARNING][4810] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.936 [INFO][4810] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.947 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:17.966371 containerd[1593]: 2026-01-24 00:58:17.952 [INFO][4721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:17.971527 systemd[1]: run-netns-cni\x2d7862d72a\x2d48e9\x2da77d\x2dbf92\x2d1143f8bbced1.mount: Deactivated successfully. Jan 24 00:58:17.975158 containerd[1593]: time="2026-01-24T00:58:17.972932159Z" level=info msg="TearDown network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" successfully" Jan 24 00:58:17.975158 containerd[1593]: time="2026-01-24T00:58:17.972960497Z" level=info msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" returns successfully" Jan 24 00:58:17.993851 containerd[1593]: time="2026-01-24T00:58:17.992335610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8c95968-8n99c,Uid:3c0ff763-a567-4414-bf09-8f7990c6e756,Namespace:calico-system,Attempt:1,}" Jan 24 00:58:18.017824 kubelet[2664]: I0124 00:58:18.010678 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s9c2b" podStartSLOduration=39.010660944 podStartE2EDuration="39.010660944s" podCreationTimestamp="2026-01-24 00:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:58:17.992175478 +0000 UTC m=+43.634321284" watchObservedRunningTime="2026-01-24 00:58:18.010660944 +0000 UTC m=+43.652806750" Jan 24 00:58:18.018086 containerd[1593]: time="2026-01-24T00:58:18.017869573Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:18.021766 kubelet[2664]: E0124 00:58:18.021732 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:18.031888 containerd[1593]: time="2026-01-24T00:58:18.031169011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:18.035785 containerd[1593]: time="2026-01-24T00:58:18.035557866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:18.036229 kubelet[2664]: E0124 00:58:18.036182 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:18.036566 kubelet[2664]: E0124 00:58:18.036541 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:18.041161 kubelet[2664]: E0124 00:58:18.039776 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw8pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-rbmf5_calico-apiserver(0b98a207-ae40-4df7-81ed-b24949ca269a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:18.041161 kubelet[2664]: E0124 00:58:18.040895 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:18.272673 systemd-networkd[1258]: cali25b0ff5534d: Link UP Jan 24 00:58:18.275649 systemd-networkd[1258]: cali25b0ff5534d: Gained carrier Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.105 [INFO][4836] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.125 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7gdj7-eth0 csi-node-driver- calico-system b7b68612-f671-4faf-9c72-eb6b0593666c 1035 0 2026-01-24 00:57:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7gdj7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali25b0ff5534d [] [] }} ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.125 [INFO][4836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.206 [INFO][4865] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" HandleID="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.206 [INFO][4865] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" HandleID="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7gdj7", "timestamp":"2026-01-24 00:58:18.206602247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.206 [INFO][4865] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.206 [INFO][4865] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.206 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.221 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.233 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.240 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.243 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.246 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.246 [INFO][4865] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.249 [INFO][4865] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38 Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.254 [INFO][4865] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.262 [INFO][4865] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.262 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" host="localhost" Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.263 [INFO][4865] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
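The ErrImagePull entries above for ghcr.io/flatcar/calico/apiserver:v3.30.4 (and the csi image further down) are retried by kubelet under an exponential back-off, which is why "Back-off pulling image" / ImagePullBackOff messages keep recurring with growing gaps while the tag remains missing from the registry. The sketch below only illustrates that retry schedule; the 10-second initial delay and 5-minute cap are assumptions about kubelet's defaults, not values taken from this log.

```go
// pullbackoff.go: illustrative sketch of an exponential back-off schedule of
// the kind kubelet applies between image pull retries. The initial delay and
// cap below are assumed defaults, not values read from this system.
package main

import (
	"fmt"
	"time"
)

func main() {
	initial := 10 * time.Second // assumed initial back-off
	cap := 5 * time.Minute      // assumed maximum back-off
	delay := initial
	var elapsed time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		elapsed += delay
		fmt.Printf("retry %d after %-8v (total wait ~%v)\n", attempt, delay, elapsed)
		delay *= 2
		if delay > cap {
			delay = cap
		}
	}
}
```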
Jan 24 00:58:18.299118 containerd[1593]: 2026-01-24 00:58:18.263 [INFO][4865] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" HandleID="k8s-pod-network.9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.267 [INFO][4836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gdj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7b68612-f671-4faf-9c72-eb6b0593666c", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7gdj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25b0ff5534d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.267 [INFO][4836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.267 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25b0ff5534d ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.278 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.280 [INFO][4836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gdj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7b68612-f671-4faf-9c72-eb6b0593666c", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38", Pod:"csi-node-driver-7gdj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25b0ff5534d", MAC:"de:62:c1:3e:99:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:18.300363 containerd[1593]: 2026-01-24 00:58:18.294 [INFO][4836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38" Namespace="calico-system" Pod="csi-node-driver-7gdj7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:18.360035 containerd[1593]: time="2026-01-24T00:58:18.358858065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:18.360035 containerd[1593]: time="2026-01-24T00:58:18.359428597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:18.360035 containerd[1593]: time="2026-01-24T00:58:18.359444699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:18.360193 containerd[1593]: time="2026-01-24T00:58:18.359905969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:18.406389 systemd-networkd[1258]: cali05e98b47117: Link UP Jan 24 00:58:18.409201 systemd-networkd[1258]: cali05e98b47117: Gained carrier Jan 24 00:58:18.438590 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.132 [INFO][4851] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.148 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0 calico-kube-controllers-5bb8c95968- calico-system 3c0ff763-a567-4414-bf09-8f7990c6e756 1034 0 2026-01-24 00:57:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bb8c95968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bb8c95968-8n99c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali05e98b47117 [] [] }} ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.148 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.228 [INFO][4874] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" HandleID="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.229 [INFO][4874] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" HandleID="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000532b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bb8c95968-8n99c", "timestamp":"2026-01-24 00:58:18.228695988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.229 [INFO][4874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.263 [INFO][4874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.263 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.342 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.354 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.364 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.368 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.372 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.372 [INFO][4874] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.375 [INFO][4874] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626 Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.381 [INFO][4874] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.392 [INFO][4874] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.392 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" host="localhost" Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.392 [INFO][4874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:18.439209 containerd[1593]: 2026-01-24 00:58:18.392 [INFO][4874] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" HandleID="k8s-pod-network.e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.402 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0", GenerateName:"calico-kube-controllers-5bb8c95968-", Namespace:"calico-system", SelfLink:"", UID:"3c0ff763-a567-4414-bf09-8f7990c6e756", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8c95968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bb8c95968-8n99c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e98b47117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.402 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.402 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05e98b47117 ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.406 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.407 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0", GenerateName:"calico-kube-controllers-5bb8c95968-", Namespace:"calico-system", SelfLink:"", UID:"3c0ff763-a567-4414-bf09-8f7990c6e756", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8c95968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626", Pod:"calico-kube-controllers-5bb8c95968-8n99c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e98b47117", MAC:"d6:68:8a:f0:7a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:18.440028 containerd[1593]: 2026-01-24 00:58:18.429 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626" Namespace="calico-system" Pod="calico-kube-controllers-5bb8c95968-8n99c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:18.467946 containerd[1593]: time="2026-01-24T00:58:18.467438376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gdj7,Uid:b7b68612-f671-4faf-9c72-eb6b0593666c,Namespace:calico-system,Attempt:1,} returns sandbox id \"9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38\"" Jan 24 00:58:18.474348 containerd[1593]: time="2026-01-24T00:58:18.473347368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:18.489865 containerd[1593]: time="2026-01-24T00:58:18.489179021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:18.489865 containerd[1593]: time="2026-01-24T00:58:18.489409835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:18.489865 containerd[1593]: time="2026-01-24T00:58:18.489427711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:18.489865 containerd[1593]: time="2026-01-24T00:58:18.489592254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:18.532174 containerd[1593]: time="2026-01-24T00:58:18.531973192Z" level=info msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" Jan 24 00:58:18.543753 containerd[1593]: time="2026-01-24T00:58:18.543561530Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:18.545204 containerd[1593]: time="2026-01-24T00:58:18.545014539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:18.545204 containerd[1593]: time="2026-01-24T00:58:18.545150954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:58:18.545852 kubelet[2664]: E0124 00:58:18.545460 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:18.545852 kubelet[2664]: E0124 00:58:18.545555 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:18.545852 kubelet[2664]: E0124 00:58:18.545819 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:18.549470 containerd[1593]: time="2026-01-24T00:58:18.549413778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:18.552710 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:18.598632 systemd-networkd[1258]: cali6a6a76a2a47: Gained IPv6LL Jan 24 00:58:18.622140 containerd[1593]: time="2026-01-24T00:58:18.621879917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:18.628909 containerd[1593]: time="2026-01-24T00:58:18.628867201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8c95968-8n99c,Uid:3c0ff763-a567-4414-bf09-8f7990c6e756,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626\"" Jan 24 00:58:18.633565 containerd[1593]: time="2026-01-24T00:58:18.629812208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:18.634469 kubelet[2664]: E0124 00:58:18.634444 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:18.634826 kubelet[2664]: E0124 00:58:18.634808 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:18.635428 kubelet[2664]: E0124 00:58:18.635367 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:18.636798 containerd[1593]: time="2026-01-24T00:58:18.629818160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:58:18.640353 kubelet[2664]: E0124 00:58:18.638351 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:18.648947 containerd[1593]: time="2026-01-24T00:58:18.648558147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:18.732087 containerd[1593]: time="2026-01-24T00:58:18.731943470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:18.735357 containerd[1593]: time="2026-01-24T00:58:18.733923651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:18.735357 containerd[1593]: time="2026-01-24T00:58:18.733999685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:18.736643 kubelet[2664]: E0124 00:58:18.735690 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:18.736643 kubelet[2664]: E0124 00:58:18.735738 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:18.736643 kubelet[2664]: E0124 00:58:18.735846 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hntzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bb8c95968-8n99c_calico-system(3c0ff763-a567-4414-bf09-8f7990c6e756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:18.737924 kubelet[2664]: E0124 00:58:18.737859 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.682 [INFO][4994] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.682 [INFO][4994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" iface="eth0" netns="/var/run/netns/cni-ead2d09e-0ee6-260a-4db3-480eca890e31" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.683 [INFO][4994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" iface="eth0" netns="/var/run/netns/cni-ead2d09e-0ee6-260a-4db3-480eca890e31" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.684 [INFO][4994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" iface="eth0" netns="/var/run/netns/cni-ead2d09e-0ee6-260a-4db3-480eca890e31" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.684 [INFO][4994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.684 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.732 [INFO][5023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.740 [INFO][5023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.740 [INFO][5023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.753 [WARNING][5023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.753 [INFO][5023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.757 [INFO][5023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:18.770402 containerd[1593]: 2026-01-24 00:58:18.765 [INFO][4994] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:18.770402 containerd[1593]: time="2026-01-24T00:58:18.770074620Z" level=info msg="TearDown network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" successfully" Jan 24 00:58:18.770402 containerd[1593]: time="2026-01-24T00:58:18.770099862Z" level=info msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" returns successfully" Jan 24 00:58:18.771938 containerd[1593]: time="2026-01-24T00:58:18.771584194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrk6w,Uid:eb625f4a-3376-469f-90ac-91f293666e81,Namespace:calico-system,Attempt:1,}" Jan 24 00:58:18.860044 systemd[1]: run-netns-cni\x2dead2d09e\x2d0ee6\x2d260a\x2d4db3\x2d480eca890e31.mount: Deactivated successfully. Jan 24 00:58:19.000040 systemd-networkd[1258]: cali28ef33cb396: Link UP Jan 24 00:58:19.002193 systemd-networkd[1258]: cali28ef33cb396: Gained carrier Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.864 [INFO][5034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.882 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--wrk6w-eth0 goldmane-666569f655- calico-system eb625f4a-3376-469f-90ac-91f293666e81 1075 0 2026-01-24 00:57:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-wrk6w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali28ef33cb396 [] [] }} ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.882 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.941 [INFO][5049] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" HandleID="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.941 [INFO][5049] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" HandleID="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-wrk6w", "timestamp":"2026-01-24 00:58:18.941422988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.941 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.941 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.941 [INFO][5049] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.950 [INFO][5049] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.957 [INFO][5049] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.967 [INFO][5049] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.970 [INFO][5049] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.974 [INFO][5049] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.974 [INFO][5049] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.977 [INFO][5049] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159 Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.981 [INFO][5049] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.990 [INFO][5049] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.990 [INFO][5049] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" host="localhost" Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.990 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:58:19.021054 containerd[1593]: 2026-01-24 00:58:18.990 [INFO][5049] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" HandleID="k8s-pod-network.b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:18.993 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrk6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eb625f4a-3376-469f-90ac-91f293666e81", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-wrk6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28ef33cb396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:18.994 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:18.994 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28ef33cb396 ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:18.998 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:18.999 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrk6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eb625f4a-3376-469f-90ac-91f293666e81", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159", Pod:"goldmane-666569f655-wrk6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28ef33cb396", MAC:"fe:07:a3:a1:89:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:19.022716 containerd[1593]: 2026-01-24 00:58:19.016 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159" Namespace="calico-system" Pod="goldmane-666569f655-wrk6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:19.028490 kubelet[2664]: E0124 00:58:19.028129 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:19.032635 kubelet[2664]: E0124 00:58:19.032519 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:19.034429 kubelet[2664]: E0124 00:58:19.032944 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:19.034429 kubelet[2664]: E0124 00:58:19.033720 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:19.035812 kubelet[2664]: E0124 00:58:19.034916 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:19.069050 containerd[1593]: time="2026-01-24T00:58:19.068406396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:58:19.069050 containerd[1593]: time="2026-01-24T00:58:19.068672362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:58:19.069050 containerd[1593]: time="2026-01-24T00:58:19.068694576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:19.069050 containerd[1593]: time="2026-01-24T00:58:19.068841512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:58:19.110710 systemd-networkd[1258]: calia055eb304fb: Gained IPv6LL Jan 24 00:58:19.134066 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:58:19.181437 containerd[1593]: time="2026-01-24T00:58:19.181391002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrk6w,Uid:eb625f4a-3376-469f-90ac-91f293666e81,Namespace:calico-system,Attempt:1,} returns sandbox id \"b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159\"" Jan 24 00:58:19.183654 containerd[1593]: time="2026-01-24T00:58:19.183388170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:19.256407 containerd[1593]: time="2026-01-24T00:58:19.255979045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:19.257874 containerd[1593]: time="2026-01-24T00:58:19.257707123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:19.257874 containerd[1593]: time="2026-01-24T00:58:19.257828197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:19.258064 kubelet[2664]: E0124 00:58:19.257970 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:19.258064 kubelet[2664]: E0124 00:58:19.258025 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:19.258386 kubelet[2664]: E0124 00:58:19.258141 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hs5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrk6w_calico-system(eb625f4a-3376-469f-90ac-91f293666e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:19.259743 kubelet[2664]: E0124 00:58:19.259467 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:19.625372 systemd-networkd[1258]: 
cali25b0ff5534d: Gained IPv6LL Jan 24 00:58:20.040120 kubelet[2664]: E0124 00:58:20.039933 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:20.042375 kubelet[2664]: E0124 00:58:20.040496 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:20.042375 kubelet[2664]: E0124 00:58:20.040644 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:20.042375 kubelet[2664]: E0124 00:58:20.040915 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:20.199680 systemd-networkd[1258]: cali05e98b47117: Gained IPv6LL Jan 24 00:58:20.969036 systemd-networkd[1258]: cali28ef33cb396: Gained IPv6LL Jan 24 00:58:21.044442 kubelet[2664]: E0124 00:58:21.043839 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:21.114975 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:34338.service - OpenSSH per-connection server daemon (10.0.0.1:34338). 
Jan 24 00:58:21.187467 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 34338 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:21.190752 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:21.199793 systemd-logind[1562]: New session 9 of user core. Jan 24 00:58:21.207986 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:58:21.400199 sshd[5164]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:21.406073 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:34338.service: Deactivated successfully. Jan 24 00:58:21.412103 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:58:21.413718 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:58:21.417680 systemd-logind[1562]: Removed session 9. Jan 24 00:58:22.458138 kubelet[2664]: I0124 00:58:22.457971 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:58:22.459486 kubelet[2664]: E0124 00:58:22.458457 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:23.048428 kubelet[2664]: E0124 00:58:23.048036 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:23.556406 kernel: bpftool[5276]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:58:23.874677 systemd-networkd[1258]: vxlan.calico: Link UP Jan 24 00:58:23.874686 systemd-networkd[1258]: vxlan.calico: Gained carrier Jan 24 00:58:24.527933 containerd[1593]: time="2026-01-24T00:58:24.526185121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:24.598169 containerd[1593]: time="2026-01-24T00:58:24.598102571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:24.600024 containerd[1593]: time="2026-01-24T00:58:24.599883267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:24.600024 containerd[1593]: time="2026-01-24T00:58:24.599945693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:58:24.600649 kubelet[2664]: E0124 00:58:24.600494 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:24.600649 kubelet[2664]: E0124 00:58:24.600554 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:24.601056 kubelet[2664]: E0124 00:58:24.600674 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f89e7f4884e54659bfd281d3a9e6919f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:24.603901 containerd[1593]: time="2026-01-24T00:58:24.603527864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:24.666619 containerd[1593]: time="2026-01-24T00:58:24.666511962Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:24.668132 containerd[1593]: time="2026-01-24T00:58:24.668040458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:24.668132 containerd[1593]: time="2026-01-24T00:58:24.668083894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:24.668328 kubelet[2664]: E0124 00:58:24.668297 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:24.668539 kubelet[2664]: E0124 00:58:24.668469 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:24.668929 kubelet[2664]: E0124 00:58:24.668781 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:24.670482 kubelet[2664]: E0124 00:58:24.670360 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:25.511790 systemd-networkd[1258]: vxlan.calico: Gained IPv6LL Jan 24 00:58:26.413567 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:59670.service - OpenSSH per-connection server daemon (10.0.0.1:59670). 
Jan 24 00:58:26.481768 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 59670 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:26.484435 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:26.492416 systemd-logind[1562]: New session 10 of user core. Jan 24 00:58:26.501945 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:58:26.700112 sshd[5363]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:26.709765 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:59674.service - OpenSSH per-connection server daemon (10.0.0.1:59674). Jan 24 00:58:26.710414 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:59670.service: Deactivated successfully. Jan 24 00:58:26.715405 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:58:26.723010 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:58:26.725469 systemd-logind[1562]: Removed session 10. Jan 24 00:58:26.762184 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 59674 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:26.765055 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:26.773791 systemd-logind[1562]: New session 11 of user core. Jan 24 00:58:26.787768 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:58:27.008457 sshd[5376]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:27.018864 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:59676.service - OpenSSH per-connection server daemon (10.0.0.1:59676). Jan 24 00:58:27.019490 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:59674.service: Deactivated successfully. Jan 24 00:58:27.031621 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:58:27.036608 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:58:27.043473 systemd-logind[1562]: Removed session 11. Jan 24 00:58:27.072087 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:27.074100 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:27.081081 systemd-logind[1562]: New session 12 of user core. Jan 24 00:58:27.086695 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:58:27.242500 sshd[5389]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:27.247909 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:59676.service: Deactivated successfully. Jan 24 00:58:27.251420 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:58:27.251874 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:58:27.255990 systemd-logind[1562]: Removed session 12. 
Jan 24 00:58:28.525702 containerd[1593]: time="2026-01-24T00:58:28.525449713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:28.707352 containerd[1593]: time="2026-01-24T00:58:28.707115353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:28.709415 containerd[1593]: time="2026-01-24T00:58:28.709163497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:28.709415 containerd[1593]: time="2026-01-24T00:58:28.709324085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:28.709803 kubelet[2664]: E0124 00:58:28.709710 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:28.711181 kubelet[2664]: E0124 00:58:28.709795 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:28.711181 kubelet[2664]: E0124 00:58:28.710081 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5dh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-7lkh6_calico-apiserver(0f4fb867-3038-43f2-9206-1156c12b1931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:28.712150 kubelet[2664]: E0124 00:58:28.712109 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:31.526222 containerd[1593]: time="2026-01-24T00:58:31.525908396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:31.641220 containerd[1593]: time="2026-01-24T00:58:31.641014735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:31.643226 containerd[1593]: time="2026-01-24T00:58:31.643096367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:31.643585 containerd[1593]: time="2026-01-24T00:58:31.643444950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:58:31.644777 kubelet[2664]: E0124 00:58:31.643867 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:31.644777 kubelet[2664]: E0124 00:58:31.643966 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:31.644777 kubelet[2664]: E0124 00:58:31.644068 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:31.649944 containerd[1593]: time="2026-01-24T00:58:31.649762571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:31.730585 containerd[1593]: time="2026-01-24T00:58:31.730425940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:31.732829 containerd[1593]: time="2026-01-24T00:58:31.732716922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:31.732829 containerd[1593]: time="2026-01-24T00:58:31.732819215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:58:31.733304 kubelet[2664]: E0124 00:58:31.733181 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:31.733664 kubelet[2664]: E0124 00:58:31.733581 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:31.734064 kubelet[2664]: E0124 00:58:31.733854 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:31.736047 kubelet[2664]: E0124 00:58:31.735998 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:32.257893 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:59682.service - OpenSSH per-connection server daemon (10.0.0.1:59682). Jan 24 00:58:32.294763 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 59682 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:32.296695 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:32.304201 systemd-logind[1562]: New session 13 of user core. Jan 24 00:58:32.309755 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:58:32.468030 sshd[5420]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:32.474643 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:59682.service: Deactivated successfully. Jan 24 00:58:32.481189 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:58:32.481848 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:58:32.484719 systemd-logind[1562]: Removed session 13. Jan 24 00:58:33.526525 containerd[1593]: time="2026-01-24T00:58:33.526106503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:33.595869 containerd[1593]: time="2026-01-24T00:58:33.595534482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:33.597738 containerd[1593]: time="2026-01-24T00:58:33.597466582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:33.597738 containerd[1593]: time="2026-01-24T00:58:33.597506449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:33.597921 kubelet[2664]: E0124 00:58:33.597783 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:33.597921 kubelet[2664]: E0124 00:58:33.597828 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:33.598461 kubelet[2664]: E0124 00:58:33.597931 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hntzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bb8c95968-8n99c_calico-system(3c0ff763-a567-4414-bf09-8f7990c6e756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:33.599749 kubelet[2664]: E0124 00:58:33.599365 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:34.477792 containerd[1593]: time="2026-01-24T00:58:34.477599447Z" level=info msg="StopPodSandbox 
for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" Jan 24 00:58:34.529513 containerd[1593]: time="2026-01-24T00:58:34.529472246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:34.616767 containerd[1593]: time="2026-01-24T00:58:34.616621391Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:34.618779 containerd[1593]: time="2026-01-24T00:58:34.618590411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:34.618836 containerd[1593]: time="2026-01-24T00:58:34.618807752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:34.618992 kubelet[2664]: E0124 00:58:34.618942 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:34.619486 kubelet[2664]: E0124 00:58:34.618999 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:34.619486 kubelet[2664]: E0124 00:58:34.619205 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw8pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-rbmf5_calico-apiserver(0b98a207-ae40-4df7-81ed-b24949ca269a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:34.620918 kubelet[2664]: E0124 00:58:34.620828 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:34.621921 containerd[1593]: time="2026-01-24T00:58:34.621522540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.568 [WARNING][5453] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gdj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7b68612-f671-4faf-9c72-eb6b0593666c", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38", Pod:"csi-node-driver-7gdj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25b0ff5534d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.573 [INFO][5453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.573 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" iface="eth0" netns="" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.573 [INFO][5453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.574 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.634 [INFO][5464] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.634 [INFO][5464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.634 [INFO][5464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.644 [WARNING][5464] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.644 [INFO][5464] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.646 [INFO][5464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:34.653146 containerd[1593]: 2026-01-24 00:58:34.650 [INFO][5453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.654218 containerd[1593]: time="2026-01-24T00:58:34.653852398Z" level=info msg="TearDown network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" successfully" Jan 24 00:58:34.654218 containerd[1593]: time="2026-01-24T00:58:34.653926094Z" level=info msg="StopPodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" returns successfully" Jan 24 00:58:34.654883 containerd[1593]: time="2026-01-24T00:58:34.654696741Z" level=info msg="RemovePodSandbox for \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" Jan 24 00:58:34.657924 containerd[1593]: time="2026-01-24T00:58:34.657849485Z" level=info msg="Forcibly stopping sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\"" Jan 24 00:58:34.685405 containerd[1593]: time="2026-01-24T00:58:34.684595120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:34.686901 containerd[1593]: time="2026-01-24T00:58:34.686787050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:34.686967 containerd[1593]: time="2026-01-24T00:58:34.686910715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:34.687492 kubelet[2664]: E0124 00:58:34.687159 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:34.687624 kubelet[2664]: E0124 00:58:34.687568 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:34.687965 kubelet[2664]: E0124 00:58:34.687703 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hs5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrk6w_calico-system(eb625f4a-3376-469f-90ac-91f293666e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:34.689467 kubelet[2664]: E0124 00:58:34.689379 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 
00:58:34.737 [WARNING][5481] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gdj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7b68612-f671-4faf-9c72-eb6b0593666c", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fda9183ff6267e3fe615a093dfdb00c2dd67d74d990812857ccba8e1a988e38", Pod:"csi-node-driver-7gdj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25b0ff5534d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.738 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.738 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" iface="eth0" netns="" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.738 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.738 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.776 [INFO][5489] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.777 [INFO][5489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.777 [INFO][5489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.787 [WARNING][5489] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.787 [INFO][5489] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" HandleID="k8s-pod-network.400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Workload="localhost-k8s-csi--node--driver--7gdj7-eth0" Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.790 [INFO][5489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:34.798066 containerd[1593]: 2026-01-24 00:58:34.794 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a" Jan 24 00:58:34.799398 containerd[1593]: time="2026-01-24T00:58:34.798680424Z" level=info msg="TearDown network for sandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" successfully" Jan 24 00:58:34.805062 containerd[1593]: time="2026-01-24T00:58:34.804955497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:34.805062 containerd[1593]: time="2026-01-24T00:58:34.805017480Z" level=info msg="RemovePodSandbox \"400b69ba39150eec747f00c7644d4f7cef336aef2c6c789fe1d1add32d3a4e1a\" returns successfully" Jan 24 00:58:34.806641 containerd[1593]: time="2026-01-24T00:58:34.806594061Z" level=info msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.891 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0", GenerateName:"calico-kube-controllers-5bb8c95968-", Namespace:"calico-system", SelfLink:"", UID:"3c0ff763-a567-4414-bf09-8f7990c6e756", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8c95968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626", Pod:"calico-kube-controllers-5bb8c95968-8n99c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e98b47117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.891 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.891 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" iface="eth0" netns="" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.891 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.891 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.936 [INFO][5516] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.937 [INFO][5516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.937 [INFO][5516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.950 [WARNING][5516] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.950 [INFO][5516] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.952 [INFO][5516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:34.959413 containerd[1593]: 2026-01-24 00:58:34.956 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:34.959413 containerd[1593]: time="2026-01-24T00:58:34.959222511Z" level=info msg="TearDown network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" successfully" Jan 24 00:58:34.959413 containerd[1593]: time="2026-01-24T00:58:34.959376425Z" level=info msg="StopPodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" returns successfully" Jan 24 00:58:34.959969 containerd[1593]: time="2026-01-24T00:58:34.959896773Z" level=info msg="RemovePodSandbox for \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" Jan 24 00:58:34.959969 containerd[1593]: time="2026-01-24T00:58:34.959923456Z" level=info msg="Forcibly stopping sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\"" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.026 [WARNING][5534] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0", GenerateName:"calico-kube-controllers-5bb8c95968-", Namespace:"calico-system", SelfLink:"", UID:"3c0ff763-a567-4414-bf09-8f7990c6e756", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8c95968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1eaf44d4cc2ae7250e7d71a7f87ef6a41b2ff26510c386c1fa2edf790cb6626", Pod:"calico-kube-controllers-5bb8c95968-8n99c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e98b47117", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.026 [INFO][5534] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.026 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" iface="eth0" netns="" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.026 [INFO][5534] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.026 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.074 [INFO][5543] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.074 [INFO][5543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.074 [INFO][5543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.088 [WARNING][5543] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.088 [INFO][5543] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" HandleID="k8s-pod-network.0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Workload="localhost-k8s-calico--kube--controllers--5bb8c95968--8n99c-eth0" Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.092 [INFO][5543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.100123 containerd[1593]: 2026-01-24 00:58:35.096 [INFO][5534] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187" Jan 24 00:58:35.100639 containerd[1593]: time="2026-01-24T00:58:35.100106064Z" level=info msg="TearDown network for sandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" successfully" Jan 24 00:58:35.107276 containerd[1593]: time="2026-01-24T00:58:35.107040651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:35.107276 containerd[1593]: time="2026-01-24T00:58:35.107145538Z" level=info msg="RemovePodSandbox \"0a91d979a3a10f1f14acf2faed2642260cbcb744055e581c9a93959550ef7187\" returns successfully" Jan 24 00:58:35.108357 containerd[1593]: time="2026-01-24T00:58:35.108092086Z" level=info msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.188 [WARNING][5561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4fb867-3038-43f2-9206-1156c12b1931", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7", Pod:"calico-apiserver-89796fd66-7lkh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12f9114989b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.188 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.189 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" iface="eth0" netns="" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.189 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.189 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.236 [INFO][5570] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.236 [INFO][5570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.236 [INFO][5570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.245 [WARNING][5570] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.245 [INFO][5570] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.248 [INFO][5570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.255050 containerd[1593]: 2026-01-24 00:58:35.251 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.255050 containerd[1593]: time="2026-01-24T00:58:35.254654785Z" level=info msg="TearDown network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" successfully" Jan 24 00:58:35.255050 containerd[1593]: time="2026-01-24T00:58:35.254687008Z" level=info msg="StopPodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" returns successfully" Jan 24 00:58:35.256084 containerd[1593]: time="2026-01-24T00:58:35.255553472Z" level=info msg="RemovePodSandbox for \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" Jan 24 00:58:35.256084 containerd[1593]: time="2026-01-24T00:58:35.255584042Z" level=info msg="Forcibly stopping sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\"" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.324 [WARNING][5587] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4fb867-3038-43f2-9206-1156c12b1931", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aeafa81cb3e148aaa64d8e1491a43e0cb2a9c561795e0b0754aa5813edbdce7", Pod:"calico-apiserver-89796fd66-7lkh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12f9114989b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.324 [INFO][5587] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.324 [INFO][5587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" iface="eth0" netns="" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.324 [INFO][5587] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.325 [INFO][5587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.371 [INFO][5596] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.372 [INFO][5596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.372 [INFO][5596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.381 [WARNING][5596] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.381 [INFO][5596] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" HandleID="k8s-pod-network.e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Workload="localhost-k8s-calico--apiserver--89796fd66--7lkh6-eth0" Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.383 [INFO][5596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.389547 containerd[1593]: 2026-01-24 00:58:35.386 [INFO][5587] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b" Jan 24 00:58:35.389547 containerd[1593]: time="2026-01-24T00:58:35.389490223Z" level=info msg="TearDown network for sandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" successfully" Jan 24 00:58:35.397205 containerd[1593]: time="2026-01-24T00:58:35.397133405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:35.397342 containerd[1593]: time="2026-01-24T00:58:35.397220417Z" level=info msg="RemovePodSandbox \"e543a254bb70531846774456055029d6c872f15399b8837ae6ecd9ecbfd32f1b\" returns successfully" Jan 24 00:58:35.398393 containerd[1593]: time="2026-01-24T00:58:35.398358470Z" level=info msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.452 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3989f243-b175-4180-9b60-b4d8b86d76d7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106", Pod:"coredns-668d6bf9bc-s9c2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia055eb304fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.453 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.453 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" iface="eth0" netns="" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.453 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.453 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.501 [INFO][5620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.501 [INFO][5620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.501 [INFO][5620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.508 [WARNING][5620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.508 [INFO][5620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.511 [INFO][5620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.518157 containerd[1593]: 2026-01-24 00:58:35.515 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.518738 containerd[1593]: time="2026-01-24T00:58:35.518201904Z" level=info msg="TearDown network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" successfully" Jan 24 00:58:35.518738 containerd[1593]: time="2026-01-24T00:58:35.518232985Z" level=info msg="StopPodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" returns successfully" Jan 24 00:58:35.519450 containerd[1593]: time="2026-01-24T00:58:35.519385607Z" level=info msg="RemovePodSandbox for \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" Jan 24 00:58:35.519492 containerd[1593]: time="2026-01-24T00:58:35.519452110Z" level=info msg="Forcibly stopping sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\"" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.624 [WARNING][5639] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3989f243-b175-4180-9b60-b4d8b86d76d7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92cbdcafca95669d5cfcc2ec0ea17c6b1eea4d4a8f156f5acda4b4eb16e7a106", Pod:"coredns-668d6bf9bc-s9c2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia055eb304fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.625 [INFO][5639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.625 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" iface="eth0" netns="" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.625 [INFO][5639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.625 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.682 [INFO][5648] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.682 [INFO][5648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.682 [INFO][5648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.692 [WARNING][5648] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.692 [INFO][5648] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" HandleID="k8s-pod-network.b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Workload="localhost-k8s-coredns--668d6bf9bc--s9c2b-eth0" Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.695 [INFO][5648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.702009 containerd[1593]: 2026-01-24 00:58:35.698 [INFO][5639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb" Jan 24 00:58:35.702009 containerd[1593]: time="2026-01-24T00:58:35.701796877Z" level=info msg="TearDown network for sandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" successfully" Jan 24 00:58:35.706548 containerd[1593]: time="2026-01-24T00:58:35.706347367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:35.706548 containerd[1593]: time="2026-01-24T00:58:35.706457215Z" level=info msg="RemovePodSandbox \"b86d5303dfe888535ac60ef8f3d06b64a9854bdb17053d1f24e9b0a27ab332cb\" returns successfully" Jan 24 00:58:35.707411 containerd[1593]: time="2026-01-24T00:58:35.707201147Z" level=info msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.785 [WARNING][5667] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrk6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eb625f4a-3376-469f-90ac-91f293666e81", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159", Pod:"goldmane-666569f655-wrk6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28ef33cb396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.787 [INFO][5667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.787 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" iface="eth0" netns="" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.787 [INFO][5667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.787 [INFO][5667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.833 [INFO][5675] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.833 [INFO][5675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.833 [INFO][5675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.845 [WARNING][5675] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.845 [INFO][5675] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.849 [INFO][5675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:35.855767 containerd[1593]: 2026-01-24 00:58:35.852 [INFO][5667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:35.855767 containerd[1593]: time="2026-01-24T00:58:35.855760410Z" level=info msg="TearDown network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" successfully" Jan 24 00:58:35.855767 containerd[1593]: time="2026-01-24T00:58:35.855785860Z" level=info msg="StopPodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" returns successfully" Jan 24 00:58:35.856671 containerd[1593]: time="2026-01-24T00:58:35.856578131Z" level=info msg="RemovePodSandbox for \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" Jan 24 00:58:35.856671 containerd[1593]: time="2026-01-24T00:58:35.856658300Z" level=info msg="Forcibly stopping sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\"" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.944 [WARNING][5692] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrk6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eb625f4a-3376-469f-90ac-91f293666e81", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3982838f58a4ffe5b0d4d20ad8fccc194e3c315d1e29ab5cb51a0bcd6178159", Pod:"goldmane-666569f655-wrk6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28ef33cb396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.945 [INFO][5692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.945 [INFO][5692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" iface="eth0" netns="" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.945 [INFO][5692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.945 [INFO][5692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.985 [INFO][5700] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.986 [INFO][5700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.986 [INFO][5700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.996 [WARNING][5700] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.996 [INFO][5700] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" HandleID="k8s-pod-network.3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Workload="localhost-k8s-goldmane--666569f655--wrk6w-eth0" Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:35.999 [INFO][5700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.006407 containerd[1593]: 2026-01-24 00:58:36.002 [INFO][5692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a" Jan 24 00:58:36.006407 containerd[1593]: time="2026-01-24T00:58:36.005837667Z" level=info msg="TearDown network for sandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" successfully" Jan 24 00:58:36.013696 containerd[1593]: time="2026-01-24T00:58:36.013606350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:36.013768 containerd[1593]: time="2026-01-24T00:58:36.013700556Z" level=info msg="RemovePodSandbox \"3e2df2a28f600661492b920e5ad7aee16f2c7f3efc9de7509dcd64ce30b75c6a\" returns successfully" Jan 24 00:58:36.014649 containerd[1593]: time="2026-01-24T00:58:36.014573583Z" level=info msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.085 [WARNING][5717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4d65b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a40b7ad0-87c5-48cf-aae6-708b12427df9", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df", Pod:"coredns-668d6bf9bc-4d65b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abcc704b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.086 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.086 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" iface="eth0" netns="" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.086 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.086 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.143 [INFO][5725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.144 [INFO][5725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.144 [INFO][5725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.154 [WARNING][5725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.154 [INFO][5725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.157 [INFO][5725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.164788 containerd[1593]: 2026-01-24 00:58:36.160 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.165618 containerd[1593]: time="2026-01-24T00:58:36.164812544Z" level=info msg="TearDown network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" successfully" Jan 24 00:58:36.165618 containerd[1593]: time="2026-01-24T00:58:36.164844778Z" level=info msg="StopPodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" returns successfully" Jan 24 00:58:36.167147 containerd[1593]: time="2026-01-24T00:58:36.166627784Z" level=info msg="RemovePodSandbox for \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" Jan 24 00:58:36.167147 containerd[1593]: time="2026-01-24T00:58:36.166789714Z" level=info msg="Forcibly stopping sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\"" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.237 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4d65b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a40b7ad0-87c5-48cf-aae6-708b12427df9", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e28af95191235e385f3f49578d7d9875bb459e0876b2c4def1b07e01bfc08df", Pod:"coredns-668d6bf9bc-4d65b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abcc704b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.237 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.237 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" iface="eth0" netns="" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.237 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.237 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.270 [INFO][5750] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.270 [INFO][5750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.270 [INFO][5750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.278 [WARNING][5750] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.278 [INFO][5750] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" HandleID="k8s-pod-network.0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Workload="localhost-k8s-coredns--668d6bf9bc--4d65b-eth0" Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.280 [INFO][5750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.286325 containerd[1593]: 2026-01-24 00:58:36.283 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5" Jan 24 00:58:36.286325 containerd[1593]: time="2026-01-24T00:58:36.286156989Z" level=info msg="TearDown network for sandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" successfully" Jan 24 00:58:36.291678 containerd[1593]: time="2026-01-24T00:58:36.291532054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:36.291678 containerd[1593]: time="2026-01-24T00:58:36.291626701Z" level=info msg="RemovePodSandbox \"0ce334fabe54231eba20d38adb37f7dc0dba97cbf4e1143efb0c674ecd2f62c5\" returns successfully" Jan 24 00:58:36.292331 containerd[1593]: time="2026-01-24T00:58:36.292311042Z" level=info msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.344 [WARNING][5768] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" WorkloadEndpoint="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.345 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.345 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" iface="eth0" netns="" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.345 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.345 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.373 [INFO][5777] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.374 [INFO][5777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.374 [INFO][5777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.382 [WARNING][5777] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.382 [INFO][5777] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.384 [INFO][5777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.390645 containerd[1593]: 2026-01-24 00:58:36.387 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.391418 containerd[1593]: time="2026-01-24T00:58:36.390682539Z" level=info msg="TearDown network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" successfully" Jan 24 00:58:36.391418 containerd[1593]: time="2026-01-24T00:58:36.390717619Z" level=info msg="StopPodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" returns successfully" Jan 24 00:58:36.391539 containerd[1593]: time="2026-01-24T00:58:36.391428945Z" level=info msg="RemovePodSandbox for \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" Jan 24 00:58:36.391539 containerd[1593]: time="2026-01-24T00:58:36.391458905Z" level=info msg="Forcibly stopping sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\"" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.455 [WARNING][5794] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" WorkloadEndpoint="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.455 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.455 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" iface="eth0" netns="" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.455 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.455 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.498 [INFO][5803] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.498 [INFO][5803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.498 [INFO][5803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.508 [WARNING][5803] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.508 [INFO][5803] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" HandleID="k8s-pod-network.7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Workload="localhost-k8s-whisker--9db55f7d4--l9kdm-eth0" Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.511 [INFO][5803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.517536 containerd[1593]: 2026-01-24 00:58:36.514 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b" Jan 24 00:58:36.517916 containerd[1593]: time="2026-01-24T00:58:36.517579598Z" level=info msg="TearDown network for sandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" successfully" Jan 24 00:58:36.521979 containerd[1593]: time="2026-01-24T00:58:36.521867122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:36.522047 containerd[1593]: time="2026-01-24T00:58:36.521988121Z" level=info msg="RemovePodSandbox \"7d21f174c1bba6ab109cab1cbba984bf0b2551db186e9e5fb2fc25075404f89b\" returns successfully" Jan 24 00:58:36.523094 containerd[1593]: time="2026-01-24T00:58:36.522708146Z" level=info msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.574 [WARNING][5820] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b98a207-ae40-4df7-81ed-b24949ca269a", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24", Pod:"calico-apiserver-89796fd66-rbmf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a6a76a2a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.575 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.575 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" iface="eth0" netns="" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.575 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.575 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.608 [INFO][5829] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.609 [INFO][5829] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.609 [INFO][5829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.616 [WARNING][5829] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.616 [INFO][5829] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.619 [INFO][5829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.627647 containerd[1593]: 2026-01-24 00:58:36.623 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.627647 containerd[1593]: time="2026-01-24T00:58:36.627617053Z" level=info msg="TearDown network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" successfully" Jan 24 00:58:36.628154 containerd[1593]: time="2026-01-24T00:58:36.627656882Z" level=info msg="StopPodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" returns successfully" Jan 24 00:58:36.628631 containerd[1593]: time="2026-01-24T00:58:36.628507588Z" level=info msg="RemovePodSandbox for \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" Jan 24 00:58:36.628631 containerd[1593]: time="2026-01-24T00:58:36.628553619Z" level=info msg="Forcibly stopping sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\"" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.696 [WARNING][5846] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0", GenerateName:"calico-apiserver-89796fd66-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b98a207-ae40-4df7-81ed-b24949ca269a", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89796fd66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"558e780cf0d0c121c8a27a24138f8feb791ffe53027757f84ccf31bb2be88a24", Pod:"calico-apiserver-89796fd66-rbmf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a6a76a2a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.696 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.697 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" iface="eth0" netns="" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.697 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.697 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.734 [INFO][5855] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.735 [INFO][5855] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.735 [INFO][5855] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.744 [WARNING][5855] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.744 [INFO][5855] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" HandleID="k8s-pod-network.128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Workload="localhost-k8s-calico--apiserver--89796fd66--rbmf5-eth0" Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.747 [INFO][5855] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:58:36.753429 containerd[1593]: 2026-01-24 00:58:36.750 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1" Jan 24 00:58:36.754226 containerd[1593]: time="2026-01-24T00:58:36.753480336Z" level=info msg="TearDown network for sandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" successfully" Jan 24 00:58:36.759078 containerd[1593]: time="2026-01-24T00:58:36.758950522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:58:36.759078 containerd[1593]: time="2026-01-24T00:58:36.759047464Z" level=info msg="RemovePodSandbox \"128bc8b9782c8f942548bb33e84c85197bae6d8e1eb08bedb08f3e42814b83b1\" returns successfully" Jan 24 00:58:37.483905 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:58346.service - OpenSSH per-connection server daemon (10.0.0.1:58346). Jan 24 00:58:37.533991 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:37.536647 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:37.542685 systemd-logind[1562]: New session 14 of user core. Jan 24 00:58:37.552671 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:58:37.715172 sshd[5863]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:37.719864 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:58346.service: Deactivated successfully. Jan 24 00:58:37.723151 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:58:37.723420 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:58:37.725830 systemd-logind[1562]: Removed session 14. 
Jan 24 00:58:39.526210 kubelet[2664]: E0124 00:58:39.526152 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:42.726665 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:58348.service - OpenSSH per-connection server daemon (10.0.0.1:58348). Jan 24 00:58:42.768774 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 58348 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:42.770984 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:42.776569 systemd-logind[1562]: New session 15 of user core. Jan 24 00:58:42.790675 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:58:42.927028 sshd[5881]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:42.934793 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:58356.service - OpenSSH per-connection server daemon (10.0.0.1:58356). Jan 24 00:58:42.935648 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:58348.service: Deactivated successfully. Jan 24 00:58:42.938990 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:58:42.941115 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:58:42.942678 systemd-logind[1562]: Removed session 15. Jan 24 00:58:42.989915 sshd[5893]: Accepted publickey for core from 10.0.0.1 port 58356 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:42.992092 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:42.998327 systemd-logind[1562]: New session 16 of user core. Jan 24 00:58:43.008662 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:58:43.294530 sshd[5893]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:43.304609 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:58368.service - OpenSSH per-connection server daemon (10.0.0.1:58368). Jan 24 00:58:43.305583 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:58356.service: Deactivated successfully. Jan 24 00:58:43.307692 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:58:43.309761 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:58:43.311938 systemd-logind[1562]: Removed session 16. Jan 24 00:58:43.349143 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:43.350909 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:43.357318 systemd-logind[1562]: New session 17 of user core. 
Jan 24 00:58:43.364607 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:58:43.525498 kubelet[2664]: E0124 00:58:43.524888 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:43.526687 kubelet[2664]: E0124 00:58:43.526538 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:43.960814 sshd[5908]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:43.972798 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:58372.service - OpenSSH per-connection server daemon (10.0.0.1:58372). Jan 24 00:58:43.982807 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:58368.service: Deactivated successfully. Jan 24 00:58:43.992380 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:58:43.992621 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:58:43.994916 systemd-logind[1562]: Removed session 17. Jan 24 00:58:44.031795 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 58372 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:44.033814 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:44.041159 systemd-logind[1562]: New session 18 of user core. Jan 24 00:58:44.053685 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:58:44.302431 sshd[5928]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:44.315592 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:58384.service - OpenSSH per-connection server daemon (10.0.0.1:58384). Jan 24 00:58:44.316426 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:58372.service: Deactivated successfully. Jan 24 00:58:44.320992 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:58:44.327785 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:58:44.330110 systemd-logind[1562]: Removed session 18. 
Jan 24 00:58:44.362458 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 58384 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:44.364901 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:44.377158 systemd-logind[1562]: New session 19 of user core. Jan 24 00:58:44.381844 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:58:44.530321 kubelet[2664]: E0124 00:58:44.529381 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:44.531386 sshd[5942]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:44.536776 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:58384.service: Deactivated successfully. Jan 24 00:58:44.546499 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:58:44.547868 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:58:44.549655 systemd-logind[1562]: Removed session 19. Jan 24 00:58:46.523815 kubelet[2664]: E0124 00:58:46.523709 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:47.524430 kubelet[2664]: E0124 00:58:47.524186 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:47.525421 kubelet[2664]: E0124 00:58:47.525198 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:47.673944 kubelet[2664]: E0124 00:58:47.673831 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:49.524967 kubelet[2664]: E0124 00:58:49.524889 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:58:49.545685 systemd[1]: Started 
sshd@19-10.0.0.132:22-10.0.0.1:54452.service - OpenSSH per-connection server daemon (10.0.0.1:54452). Jan 24 00:58:49.588325 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 54452 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:49.589880 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:49.595120 systemd-logind[1562]: New session 20 of user core. Jan 24 00:58:49.602694 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:58:49.729553 sshd[5991]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:49.734854 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:54452.service: Deactivated successfully. Jan 24 00:58:49.739068 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:58:49.739193 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:58:49.742038 systemd-logind[1562]: Removed session 20. Jan 24 00:58:51.525092 containerd[1593]: time="2026-01-24T00:58:51.524950218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:51.596861 containerd[1593]: time="2026-01-24T00:58:51.596755363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:51.598417 containerd[1593]: time="2026-01-24T00:58:51.598230618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:51.598611 containerd[1593]: time="2026-01-24T00:58:51.598410137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:58:51.598779 kubelet[2664]: E0124 00:58:51.598700 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:51.598779 kubelet[2664]: E0124 00:58:51.598774 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:51.599432 kubelet[2664]: E0124 00:58:51.599068 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f89e7f4884e54659bfd281d3a9e6919f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:51.602471 containerd[1593]: time="2026-01-24T00:58:51.601740545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:51.663113 containerd[1593]: time="2026-01-24T00:58:51.662951351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:51.665491 containerd[1593]: time="2026-01-24T00:58:51.665021664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:51.665491 containerd[1593]: time="2026-01-24T00:58:51.665077959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:51.665614 kubelet[2664]: E0124 00:58:51.665427 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:51.665614 kubelet[2664]: E0124 00:58:51.665472 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:51.665614 kubelet[2664]: E0124 00:58:51.665569 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-689f665f8b-hwn7z_calico-system(c47f04db-70a5-460c-8f3b-ca0ed30f8b3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:51.667494 kubelet[2664]: E0124 00:58:51.667235 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:58:54.753675 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:40236.service - OpenSSH per-connection server daemon (10.0.0.1:40236). 
Jan 24 00:58:54.794940 sshd[6009]: Accepted publickey for core from 10.0.0.1 port 40236 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:58:54.797033 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:54.804980 systemd-logind[1562]: New session 21 of user core. Jan 24 00:58:54.810779 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:58:54.962920 sshd[6009]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:54.968702 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:40236.service: Deactivated successfully. Jan 24 00:58:54.973976 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:58:54.974544 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:58:54.976882 systemd-logind[1562]: Removed session 21. Jan 24 00:58:55.524922 containerd[1593]: time="2026-01-24T00:58:55.524681234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:55.791962 containerd[1593]: time="2026-01-24T00:58:55.791709517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:55.793416 containerd[1593]: time="2026-01-24T00:58:55.793221986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:55.793416 containerd[1593]: time="2026-01-24T00:58:55.793357469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:55.793765 kubelet[2664]: E0124 00:58:55.793665 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:55.793765 kubelet[2664]: E0124 00:58:55.793714 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:55.794773 kubelet[2664]: E0124 00:58:55.793817 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5dh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-7lkh6_calico-apiserver(0f4fb867-3038-43f2-9206-1156c12b1931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:55.795456 kubelet[2664]: E0124 00:58:55.795387 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-7lkh6" podUID="0f4fb867-3038-43f2-9206-1156c12b1931" Jan 24 00:58:56.526226 containerd[1593]: time="2026-01-24T00:58:56.526049816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:56.599089 containerd[1593]: time="2026-01-24T00:58:56.598845948Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:56.600561 containerd[1593]: time="2026-01-24T00:58:56.600485343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:56.600617 containerd[1593]: time="2026-01-24T00:58:56.600595211Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:58:56.600858 kubelet[2664]: E0124 00:58:56.600800 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:56.600908 kubelet[2664]: E0124 00:58:56.600875 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:56.601389 kubelet[2664]: E0124 00:58:56.601169 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.601544 containerd[1593]: time="2026-01-24T00:58:56.601439520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:56.664750 containerd[1593]: time="2026-01-24T00:58:56.664656519Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:56.667844 containerd[1593]: time="2026-01-24T00:58:56.667785116Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:56.668125 containerd[1593]: time="2026-01-24T00:58:56.667842997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:56.668602 kubelet[2664]: E0124 00:58:56.668108 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:56.668602 kubelet[2664]: E0124 00:58:56.668166 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:56.668691 containerd[1593]: time="2026-01-24T00:58:56.668597870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:56.669659 kubelet[2664]: E0124 00:58:56.669481 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hntzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bb8c95968-8n99c_calico-system(3c0ff763-a567-4414-bf09-8f7990c6e756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.672179 kubelet[2664]: E0124 00:58:56.672128 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bb8c95968-8n99c" podUID="3c0ff763-a567-4414-bf09-8f7990c6e756" Jan 24 00:58:56.730167 containerd[1593]: time="2026-01-24T00:58:56.729914646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:56.731674 containerd[1593]: time="2026-01-24T00:58:56.731536248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:56.731728 containerd[1593]: time="2026-01-24T00:58:56.731664789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:58:56.732199 kubelet[2664]: E0124 00:58:56.732029 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:56.732199 kubelet[2664]: E0124 00:58:56.732098 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:56.732534 
kubelet[2664]: E0124 00:58:56.732336 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwx2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7gdj7_calico-system(b7b68612-f671-4faf-9c72-eb6b0593666c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.733703 kubelet[2664]: E0124 00:58:56.733605 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7gdj7" podUID="b7b68612-f671-4faf-9c72-eb6b0593666c" Jan 24 00:58:59.524554 kubelet[2664]: E0124 00:58:59.524352 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:59.526796 containerd[1593]: 
time="2026-01-24T00:58:59.526487574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:59.594234 containerd[1593]: time="2026-01-24T00:58:59.594054749Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:59.596043 containerd[1593]: time="2026-01-24T00:58:59.595795687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:59.596043 containerd[1593]: time="2026-01-24T00:58:59.595949226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:59.596524 kubelet[2664]: E0124 00:58:59.596429 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:59.596524 kubelet[2664]: E0124 00:58:59.596496 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:59.596892 kubelet[2664]: E0124 00:58:59.596704 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hs5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrk6w_calico-system(eb625f4a-3376-469f-90ac-91f293666e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:59.598390 kubelet[2664]: E0124 00:58:59.598358 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrk6w" podUID="eb625f4a-3376-469f-90ac-91f293666e81" Jan 24 00:58:59.973641 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250). Jan 24 00:59:00.028674 sshd[6025]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:00.031130 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:00.038468 systemd-logind[1562]: New session 22 of user core. Jan 24 00:59:00.046599 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:59:00.204974 sshd[6025]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:00.209987 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:40250.service: Deactivated successfully. Jan 24 00:59:00.213310 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:59:00.213323 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:59:00.215371 systemd-logind[1562]: Removed session 22. 
Jan 24 00:59:03.526309 kubelet[2664]: E0124 00:59:03.526169 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-689f665f8b-hwn7z" podUID="c47f04db-70a5-460c-8f3b-ca0ed30f8b3a" Jan 24 00:59:04.529928 containerd[1593]: time="2026-01-24T00:59:04.529052668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:04.594208 containerd[1593]: time="2026-01-24T00:59:04.593990738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:04.596407 containerd[1593]: time="2026-01-24T00:59:04.596217073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:04.596544 containerd[1593]: time="2026-01-24T00:59:04.596422610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:59:04.596810 kubelet[2664]: E0124 00:59:04.596635 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:04.596810 kubelet[2664]: E0124 00:59:04.596787 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:04.597338 kubelet[2664]: E0124 00:59:04.596938 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw8pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-89796fd66-rbmf5_calico-apiserver(0b98a207-ae40-4df7-81ed-b24949ca269a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:04.598526 kubelet[2664]: E0124 00:59:04.598410 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-89796fd66-rbmf5" podUID="0b98a207-ae40-4df7-81ed-b24949ca269a" Jan 24 00:59:05.216723 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:54348.service - OpenSSH per-connection server daemon (10.0.0.1:54348). Jan 24 00:59:05.254740 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 54348 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:05.256935 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:05.264112 systemd-logind[1562]: New session 23 of user core. Jan 24 00:59:05.273730 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:59:05.420537 sshd[6048]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:05.425042 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:54348.service: Deactivated successfully. 
Jan 24 00:59:05.428108 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:59:05.428114 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:59:05.430203 systemd-logind[1562]: Removed session 23.