Jan 28 00:54:32.906001 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 00:54:32.906043 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:54:32.906064 kernel: BIOS-provided physical RAM map:
Jan 28 00:54:32.906075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 00:54:32.906083 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 00:54:32.906091 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 00:54:32.906100 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 28 00:54:32.906108 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 28 00:54:32.906118 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 00:54:32.906133 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 00:54:32.906145 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 00:54:32.906154 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 00:54:32.906216 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 00:54:32.906226 kernel: NX (Execute Disable) protection: active
Jan 28 00:54:32.906236 kernel: APIC: Static calls initialized
Jan 28 00:54:32.906298 kernel: SMBIOS 2.8 present.
Jan 28 00:54:32.906310 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 28 00:54:32.906321 kernel: Hypervisor detected: KVM
Jan 28 00:54:32.906330 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 00:54:32.906339 kernel: kvm-clock: using sched offset of 13448110282 cycles
Jan 28 00:54:32.906348 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 00:54:32.906358 kernel: tsc: Detected 2445.424 MHz processor
Jan 28 00:54:32.906370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 00:54:32.906379 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 00:54:32.906394 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 00:54:32.906461 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 00:54:32.906472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 00:54:32.906484 kernel: Using GB pages for direct mapping
Jan 28 00:54:32.906493 kernel: ACPI: Early table checksum verification disabled
Jan 28 00:54:32.906502 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 28 00:54:32.906511 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906520 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906531 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906546 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 28 00:54:32.906556 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906564 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906577 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906587 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:54:32.906596 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 28 00:54:32.906605 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 28 00:54:32.906622 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 28 00:54:32.906638 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 28 00:54:32.906647 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 28 00:54:32.906656 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 28 00:54:32.906666 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 28 00:54:32.906677 kernel: No NUMA configuration found
Jan 28 00:54:32.906689 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 28 00:54:32.906706 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 28 00:54:32.906716 kernel: Zone ranges:
Jan 28 00:54:32.906725 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 00:54:32.906734 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 28 00:54:32.906746 kernel: Normal empty
Jan 28 00:54:32.906756 kernel: Movable zone start for each node
Jan 28 00:54:32.906765 kernel: Early memory node ranges
Jan 28 00:54:32.906774 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 00:54:32.906784 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 28 00:54:32.906796 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 28 00:54:32.906813 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 00:54:32.906875 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 00:54:32.906887 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 28 00:54:32.906970 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 00:54:32.906982 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 00:54:32.906993 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 00:54:32.907004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 00:54:32.907015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 00:54:32.907028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 00:54:32.907044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 00:54:32.907053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 00:54:32.907062 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 00:54:32.907071 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 00:54:32.907083 kernel: TSC deadline timer available
Jan 28 00:54:32.907094 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 28 00:54:32.907104 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 00:54:32.907113 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 00:54:32.907170 kernel: kvm-guest: setup PV sched yield
Jan 28 00:54:32.907190 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 00:54:32.907202 kernel: Booting paravirtualized kernel on KVM
Jan 28 00:54:32.907211 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 00:54:32.907220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 00:54:32.907229 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 28 00:54:32.907239 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 28 00:54:32.907252 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 00:54:32.907261 kernel: kvm-guest: PV spinlocks enabled
Jan 28 00:54:32.907270 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 00:54:32.907286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:54:32.907298 kernel: random: crng init done
Jan 28 00:54:32.907310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 00:54:32.907319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 00:54:32.907328 kernel: Fallback order for Node 0: 0
Jan 28 00:54:32.907337 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 28 00:54:32.907349 kernel: Policy zone: DMA32
Jan 28 00:54:32.907360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 00:54:32.907375 kernel: Memory: 2434612K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136880K reserved, 0K cma-reserved)
Jan 28 00:54:32.907384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 00:54:32.907395 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 00:54:32.907458 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 00:54:32.907469 kernel: Dynamic Preempt: voluntary
Jan 28 00:54:32.907481 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 00:54:32.907498 kernel: rcu: RCU event tracing is enabled.
Jan 28 00:54:32.907508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 00:54:32.907519 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 00:54:32.907536 kernel: Rude variant of Tasks RCU enabled.
Jan 28 00:54:32.907546 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 00:54:32.907555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 00:54:32.907565 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 00:54:32.907615 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 00:54:32.907629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 00:54:32.907640 kernel: Console: colour VGA+ 80x25
Jan 28 00:54:32.907649 kernel: printk: console [ttyS0] enabled
Jan 28 00:54:32.907658 kernel: ACPI: Core revision 20230628
Jan 28 00:54:32.907667 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 00:54:32.907686 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 00:54:32.907696 kernel: x2apic enabled
Jan 28 00:54:32.907705 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 00:54:32.907714 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 00:54:32.907725 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 00:54:32.907737 kernel: kvm-guest: setup PV IPIs
Jan 28 00:54:32.907747 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 00:54:32.907775 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 28 00:54:32.907787 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 28 00:54:32.907796 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 00:54:32.907806 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 00:54:32.907823 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 00:54:32.907835 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 00:54:32.907844 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 00:54:32.907854 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 00:54:32.907865 kernel: Speculative Store Bypass: Vulnerable
Jan 28 00:54:32.907882 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 00:54:32.908057 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 00:54:32.908070 kernel: active return thunk: srso_alias_return_thunk
Jan 28 00:54:32.908082 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 00:54:32.908094 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 00:54:32.908104 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 00:54:32.908114 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 00:54:32.908124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 00:54:32.908144 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 00:54:32.908154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 00:54:32.908164 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 00:54:32.908174 kernel: Freeing SMP alternatives memory: 32K
Jan 28 00:54:32.908186 kernel: pid_max: default: 32768 minimum: 301
Jan 28 00:54:32.908199 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 00:54:32.908208 kernel: landlock: Up and running.
Jan 28 00:54:32.908218 kernel: SELinux: Initializing.
Jan 28 00:54:32.908227 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:54:32.908244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:54:32.908256 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 00:54:32.908269 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:54:32.908279 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:54:32.908289 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:54:32.908298 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 00:54:32.908310 kernel: signal: max sigframe size: 1776
Jan 28 00:54:32.908322 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 00:54:32.908379 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 00:54:32.908396 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 00:54:32.908463 kernel: smp: Bringing up secondary CPUs ...
Jan 28 00:54:32.908475 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 00:54:32.908487 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 00:54:32.908497 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 00:54:32.908553 kernel: smpboot: Max logical packages: 1
Jan 28 00:54:32.908564 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 28 00:54:32.908573 kernel: devtmpfs: initialized
Jan 28 00:54:32.908585 kernel: x86/mm: Memory block size: 128MB
Jan 28 00:54:32.908602 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 00:54:32.908612 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 00:54:32.908622 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 00:54:32.908633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 00:54:32.908646 kernel: audit: initializing netlink subsys (disabled)
Jan 28 00:54:32.908655 kernel: audit: type=2000 audit(1769561666.403:1): state=initialized audit_enabled=0 res=1
Jan 28 00:54:32.908665 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 00:54:32.908676 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 00:54:32.908689 kernel: cpuidle: using governor menu
Jan 28 00:54:32.908704 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 00:54:32.908714 kernel: dca service started, version 1.12.1
Jan 28 00:54:32.908725 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 00:54:32.908738 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 00:54:32.908748 kernel: PCI: Using configuration type 1 for base access
Jan 28 00:54:32.908758 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 00:54:32.908768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 00:54:32.908781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 00:54:32.908791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 00:54:32.908806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 00:54:32.908817 kernel: ACPI: Added _OSI(Module Device)
Jan 28 00:54:32.908830 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 00:54:32.908840 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 00:54:32.908849 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 00:54:32.908859 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 00:54:32.908873 kernel: ACPI: Interpreter enabled
Jan 28 00:54:32.908883 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 00:54:32.909023 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 00:54:32.909045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 00:54:32.909057 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 00:54:32.909067 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 00:54:32.909076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 00:54:32.910013 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 00:54:32.910241 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 00:54:32.910510 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 00:54:32.910536 kernel: PCI host bridge to bus 0000:00
Jan 28 00:54:32.911051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 00:54:32.911245 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 00:54:32.911499 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 00:54:32.911691 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 00:54:32.911880 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 00:54:32.912177 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 28 00:54:32.912376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 00:54:32.912997 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 00:54:32.913365 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 28 00:54:32.913643 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 28 00:54:32.913848 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 28 00:54:32.914163 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 28 00:54:32.914370 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 00:54:32.914859 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 28 00:54:32.915127 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 28 00:54:32.915282 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 28 00:54:32.915488 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 28 00:54:32.915741 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 28 00:54:32.915976 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 28 00:54:32.916133 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 28 00:54:32.916293 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 28 00:54:32.916632 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 28 00:54:32.916788 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 28 00:54:32.917131 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 28 00:54:32.918612 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 28 00:54:32.918765 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 28 00:54:32.919261 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 00:54:32.919559 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 00:54:32.919787 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 00:54:32.920017 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 28 00:54:32.920167 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 28 00:54:32.920447 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 00:54:32.920603 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 28 00:54:32.920619 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 00:54:32.920627 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 00:54:32.920635 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 00:54:32.920642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 00:54:32.920650 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 00:54:32.920657 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 00:54:32.920664 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 00:54:32.920672 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 00:54:32.920679 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 00:54:32.920689 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 00:54:32.920697 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 00:54:32.920704 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 00:54:32.920712 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 00:54:32.920719 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 00:54:32.920726 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 00:54:32.920734 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 00:54:32.920741 kernel: iommu: Default domain type: Translated
Jan 28 00:54:32.920748 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 00:54:32.920759 kernel: PCI: Using ACPI for IRQ routing
Jan 28 00:54:32.920766 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 00:54:32.920773 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 00:54:32.920781 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 28 00:54:32.921067 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 00:54:32.921222 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 00:54:32.921367 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 00:54:32.921376 kernel: vgaarb: loaded
Jan 28 00:54:32.921390 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 00:54:32.921448 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 00:54:32.921457 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 00:54:32.921465 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 00:54:32.921472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 00:54:32.921479 kernel: pnp: PnP ACPI init
Jan 28 00:54:32.922021 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 00:54:32.922035 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 00:54:32.922049 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 00:54:32.922056 kernel: NET: Registered PF_INET protocol family
Jan 28 00:54:32.922063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 00:54:32.922071 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 00:54:32.922078 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 00:54:32.922085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 00:54:32.922092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 00:54:32.922099 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 00:54:32.922107 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:54:32.922117 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:54:32.922124 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 00:54:32.922131 kernel: NET: Registered PF_XDP protocol family
Jan 28 00:54:32.922278 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 00:54:32.922464 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 00:54:32.922603 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 00:54:32.922736 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 00:54:32.922869 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 00:54:32.923082 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 28 00:54:32.923099 kernel: PCI: CLS 0 bytes, default 64
Jan 28 00:54:32.923106 kernel: Initialise system trusted keyrings
Jan 28 00:54:32.923113 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 00:54:32.923120 kernel: Key type asymmetric registered
Jan 28 00:54:32.923127 kernel: Asymmetric key parser 'x509' registered
Jan 28 00:54:32.923134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 00:54:32.923141 kernel: io scheduler mq-deadline registered
Jan 28 00:54:32.923149 kernel: io scheduler kyber registered
Jan 28 00:54:32.923156 kernel: io scheduler bfq registered
Jan 28 00:54:32.923167 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 00:54:32.923175 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 00:54:32.923182 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 00:54:32.923190 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 00:54:32.923197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 00:54:32.923204 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 00:54:32.923212 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 00:54:32.923219 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 00:54:32.923226 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 00:54:32.923237 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 00:54:32.923564 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 00:54:32.923712 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 00:54:32.923852 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T00:54:31 UTC (1769561671)
Jan 28 00:54:32.924079 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 00:54:32.924092 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 00:54:32.924100 kernel: NET: Registered PF_INET6 protocol family
Jan 28 00:54:32.924112 kernel: Segment Routing with IPv6
Jan 28 00:54:32.924119 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 00:54:32.924127 kernel: NET: Registered PF_PACKET protocol family
Jan 28 00:54:32.924134 kernel: Key type dns_resolver registered
Jan 28 00:54:32.924141 kernel: IPI shorthand broadcast: enabled
Jan 28 00:54:32.924148 kernel: sched_clock: Marking stable (3528112617, 959117220)->(6200757783, -1713527946)
Jan 28 00:54:32.924156 kernel: registered taskstats version 1
Jan 28 00:54:32.924163 kernel: Loading compiled-in X.509 certificates
Jan 28 00:54:32.924171 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 00:54:32.924181 kernel: Key type .fscrypt registered
Jan 28 00:54:32.924188 kernel: Key type fscrypt-provisioning registered
Jan 28 00:54:32.924195 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 00:54:32.924202 kernel: ima: Allocated hash algorithm: sha1
Jan 28 00:54:32.924210 kernel: ima: No architecture policies found
Jan 28 00:54:32.924217 kernel: clk: Disabling unused clocks
Jan 28 00:54:32.924224 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 00:54:32.924231 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 00:54:32.924239 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 00:54:32.924249 kernel: Run /init as init process
Jan 28 00:54:32.924256 kernel: with arguments:
Jan 28 00:54:32.924263 kernel: /init
Jan 28 00:54:32.924270 kernel: with environment:
Jan 28 00:54:32.924277 kernel: HOME=/
Jan 28 00:54:32.924284 kernel: TERM=linux
Jan 28 00:54:32.924294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 00:54:32.924303 systemd[1]: Detected virtualization kvm.
Jan 28 00:54:32.924314 systemd[1]: Detected architecture x86-64.
Jan 28 00:54:32.924321 systemd[1]: Running in initrd.
Jan 28 00:54:32.924328 systemd[1]: No hostname configured, using default hostname.
Jan 28 00:54:32.924336 systemd[1]: Hostname set to .
Jan 28 00:54:32.924343 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 00:54:32.924351 systemd[1]: Queued start job for default target initrd.target.
Jan 28 00:54:32.924359 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 00:54:32.924366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 00:54:32.924387 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 00:54:32.924395 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 00:54:32.924448 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 00:54:32.924457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 00:54:32.924466 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 00:54:32.924473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 00:54:32.924481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 00:54:32.924493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 00:54:32.924500 systemd[1]: Reached target paths.target - Path Units.
Jan 28 00:54:32.924508 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 00:54:32.924516 systemd[1]: Reached target swap.target - Swaps.
Jan 28 00:54:32.924550 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 00:54:32.924561 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 00:54:32.924572 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 00:54:32.924580 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 00:54:32.924587 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 00:54:32.924595 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 00:54:32.924603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 00:54:32.924610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 00:54:32.924618 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 00:54:32.924626 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 00:54:32.924634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 00:54:32.924644 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 00:54:32.924652 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 00:54:32.924660 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 00:54:32.924667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 00:54:32.924675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:54:32.924683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 00:54:32.924691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 00:54:32.924698 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 00:54:32.924736 systemd-journald[194]: Collecting audit messages is disabled.
Jan 28 00:54:32.924760 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 00:54:32.924768 systemd-journald[194]: Journal started
Jan 28 00:54:32.924784 systemd-journald[194]: Runtime Journal (/run/log/journal/c9c04a4eff434172afe6b6471e06efff) is 6.0M, max 48.4M, 42.3M free.
Jan 28 00:54:32.879694 systemd-modules-load[195]: Inserted module 'overlay'
Jan 28 00:54:33.533883 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 00:54:33.263986 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 00:54:33.525523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:54:33.532682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:54:33.564522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 00:54:33.593139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 00:54:33.611516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 00:54:33.612224 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:54:33.618055 kernel: Bridge firewalling registered
Jan 28 00:54:33.615884 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 28 00:54:33.629258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 00:54:33.655355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 00:54:33.665746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 00:54:33.673203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 00:54:33.693265 dracut-cmdline[223]: dracut-dracut-053
Jan 28 00:54:33.693265 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:54:33.707728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 00:54:33.724205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 00:54:33.748371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 00:54:33.789792 systemd-resolved[282]: Positive Trust Anchors:
Jan 28 00:54:33.789846 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 00:54:33.789874 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 00:54:33.793385 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jan 28 00:54:33.843728 kernel: SCSI subsystem initialized
Jan 28 00:54:33.796172 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 00:54:33.801726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:54:33.858998 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 00:54:33.877286 kernel: iscsi: registered transport (tcp)
Jan 28 00:54:33.914168 kernel: iscsi: registered transport (qla4xxx)
Jan 28 00:54:33.914318 kernel: QLogic iSCSI HBA Driver
Jan 28 00:54:34.022021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 00:54:34.046268 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 00:54:34.093034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 00:54:34.093117 kernel: device-mapper: uevent: version 1.0.3
Jan 28 00:54:34.098067 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 00:54:34.163092 kernel: raid6: avx2x4 gen() 29654 MB/s
Jan 28 00:54:34.182090 kernel: raid6: avx2x2 gen() 26995 MB/s
Jan 28 00:54:34.203599 kernel: raid6: avx2x1 gen() 22813 MB/s
Jan 28 00:54:34.203690 kernel: raid6: using algorithm avx2x4 gen() 29654 MB/s
Jan 28 00:54:34.225270 kernel: raid6: .... xor() 3735 MB/s, rmw enabled
Jan 28 00:54:34.225362 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 00:54:34.261117 kernel: xor: automatically using best checksumming function avx
Jan 28 00:54:34.493008 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 00:54:34.513729 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 00:54:34.534263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 00:54:34.558047 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Jan 28 00:54:34.566483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 00:54:34.593159 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 00:54:34.620097 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jan 28 00:54:34.673711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 00:54:34.699394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 00:54:34.830371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 00:54:34.856540 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 00:54:34.878991 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 00:54:34.888983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 00:54:34.894753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 00:54:34.899877 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 00:54:34.926013 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 00:54:34.931394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 00:54:34.949470 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 00:54:34.953629 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 00:54:34.957537 kernel: GPT:9289727 != 19775487
Jan 28 00:54:34.957593 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 00:54:34.962220 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 00:54:34.962270 kernel: GPT:9289727 != 19775487
Jan 28 00:54:34.966607 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 00:54:34.966652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:54:34.972600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 00:54:34.987653 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 00:54:34.992795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:54:35.007039 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:54:35.024078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 00:54:35.046871 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Jan 28 00:54:35.024511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:54:35.047204 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:54:35.066011 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (479)
Jan 28 00:54:35.081045 kernel: libata version 3.00 loaded.
Jan 28 00:54:35.085066 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 28 00:54:35.086625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:54:35.099978 kernel: AES CTR mode by8 optimization enabled
Jan 28 00:54:35.106125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 00:54:35.119975 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 00:54:35.120206 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 00:54:35.123677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 00:54:35.146441 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 00:54:35.166093 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 00:54:35.166348 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 00:54:35.166658 kernel: scsi host0: ahci
Jan 28 00:54:35.167078 kernel: scsi host1: ahci
Jan 28 00:54:35.172512 kernel: scsi host2: ahci
Jan 28 00:54:35.172774 kernel: scsi host3: ahci
Jan 28 00:54:35.173111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 00:54:35.190806 kernel: scsi host4: ahci
Jan 28 00:54:35.191024 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 00:54:36.857878 kernel: scsi host5: ahci
Jan 28 00:54:36.858760 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 28 00:54:36.858779 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 28 00:54:36.866458 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 28 00:54:36.866491 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 28 00:54:36.866099 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 00:54:36.892653 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 28 00:54:36.892676 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 28 00:54:36.892689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:54:36.891053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:54:36.908587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:54:36.924221 disk-uuid[563]: Primary Header is updated.
Jan 28 00:54:36.924221 disk-uuid[563]: Secondary Entries is updated.
Jan 28 00:54:36.924221 disk-uuid[563]: Secondary Header is updated.
Jan 28 00:54:36.935506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:54:37.190988 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 00:54:37.197037 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 00:54:37.197142 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 00:54:37.205090 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 00:54:37.210104 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 00:54:37.210179 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 00:54:37.216071 kernel: ata3.00: applying bridge limits
Jan 28 00:54:37.221087 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 00:54:37.225076 kernel: ata3.00: configured for UDMA/100
Jan 28 00:54:37.231078 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 00:54:37.291673 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 00:54:37.292317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 00:54:37.310227 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 00:54:37.929010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:54:37.929995 disk-uuid[568]: The operation has completed successfully.
Jan 28 00:54:37.973203 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 00:54:37.973487 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 00:54:38.002224 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 00:54:38.013214 sh[596]: Success
Jan 28 00:54:38.040012 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 28 00:54:38.102724 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 00:54:38.118078 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 00:54:38.125207 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 00:54:38.153354 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 00:54:38.153457 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:54:38.153479 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 00:54:38.157609 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 00:54:38.160670 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 00:54:38.176659 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 00:54:38.181631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 00:54:38.198216 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 00:54:38.203093 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 00:54:38.252001 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:54:38.252074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:54:38.252087 kernel: BTRFS info (device vda6): using free space tree
Jan 28 00:54:38.265016 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 00:54:38.281763 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 00:54:38.291302 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:54:38.299553 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 00:54:38.312309 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 00:54:38.562467 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 00:54:38.603487 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 00:54:38.634692 systemd-networkd[783]: lo: Link UP
Jan 28 00:54:38.634754 systemd-networkd[783]: lo: Gained carrier
Jan 28 00:54:38.637595 systemd-networkd[783]: Enumeration completed
Jan 28 00:54:38.639170 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:54:38.639175 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 00:54:38.640102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 00:54:38.668729 ignition[712]: Ignition 2.19.0
Jan 28 00:54:38.644146 systemd-networkd[783]: eth0: Link UP
Jan 28 00:54:38.668740 ignition[712]: Stage: fetch-offline
Jan 28 00:54:38.644151 systemd-networkd[783]: eth0: Gained carrier
Jan 28 00:54:38.668869 ignition[712]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:54:38.644160 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:54:38.668888 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:54:38.650310 systemd[1]: Reached target network.target - Network.
Jan 28 00:54:38.669194 ignition[712]: parsed url from cmdline: ""
Jan 28 00:54:38.692013 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 00:54:38.669203 ignition[712]: no config URL provided
Jan 28 00:54:38.669212 ignition[712]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 00:54:38.669231 ignition[712]: no config at "/usr/lib/ignition/user.ign"
Jan 28 00:54:38.669274 ignition[712]: op(1): [started] loading QEMU firmware config module
Jan 28 00:54:38.669282 ignition[712]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 00:54:38.742564 ignition[712]: op(1): [finished] loading QEMU firmware config module
Jan 28 00:54:39.112168 ignition[712]: parsing config with SHA512: 0b0922591767e45804b30932cb60efd549df8c475d8e5acb56dcc20f476a99a25fc31ac90a4843162fc6cfdd9e937e4a44c92079c569c346386e60fbfa9d8768
Jan 28 00:54:39.126264 unknown[712]: fetched base config from "system"
Jan 28 00:54:39.126278 unknown[712]: fetched user config from "qemu"
Jan 28 00:54:39.127884 ignition[712]: fetch-offline: fetch-offline passed
Jan 28 00:54:39.128223 ignition[712]: Ignition finished successfully
Jan 28 00:54:39.143651 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 00:54:39.148467 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 00:54:39.176324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 00:54:39.319777 ignition[790]: Ignition 2.19.0
Jan 28 00:54:39.320155 ignition[790]: Stage: kargs
Jan 28 00:54:39.320601 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:54:39.320619 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:54:39.321858 ignition[790]: kargs: kargs passed
Jan 28 00:54:39.321998 ignition[790]: Ignition finished successfully
Jan 28 00:54:39.343087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 00:54:39.363278 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 00:54:39.403704 ignition[798]: Ignition 2.19.0
Jan 28 00:54:39.403750 ignition[798]: Stage: disks
Jan 28 00:54:39.404025 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:54:39.404039 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:54:39.404812 ignition[798]: disks: disks passed
Jan 28 00:54:39.404859 ignition[798]: Ignition finished successfully
Jan 28 00:54:39.424106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 00:54:39.431683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 00:54:39.443763 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 00:54:39.446636 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 00:54:39.456466 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 00:54:39.463851 systemd[1]: Reached target basic.target - Basic System.
Jan 28 00:54:39.485157 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 00:54:39.518551 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 28 00:54:39.527600 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 00:54:39.562213 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 00:54:39.757485 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 28 00:54:39.765705 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none.
Jan 28 00:54:39.765455 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 00:54:39.769363 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 00:54:39.793482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 00:54:39.802042 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 00:54:39.812732 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 00:54:39.813021 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 00:54:39.813053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 00:54:39.851137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 00:54:39.870045 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Jan 28 00:54:39.871311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 00:54:39.886548 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:54:39.886568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:54:39.886579 kernel: BTRFS info (device vda6): using free space tree
Jan 28 00:54:39.894950 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 00:54:39.901534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 00:54:39.931374 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 00:54:39.946027 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 28 00:54:39.956710 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 00:54:39.966255 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 00:54:40.126102 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 00:54:40.143833 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 00:54:40.149018 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 00:54:40.168067 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:54:40.168499 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 00:54:40.192879 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 00:54:40.281365 ignition[929]: INFO : Ignition 2.19.0
Jan 28 00:54:40.281365 ignition[929]: INFO : Stage: mount
Jan 28 00:54:40.288020 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 00:54:40.288020 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:54:40.301823 ignition[929]: INFO : mount: mount passed
Jan 28 00:54:40.305131 ignition[929]: INFO : Ignition finished successfully
Jan 28 00:54:40.304226 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 00:54:40.335302 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 00:54:40.780170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 00:54:40.793021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Jan 28 00:54:40.801228 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:54:40.801253 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:54:40.801265 kernel: BTRFS info (device vda6): using free space tree
Jan 28 00:54:40.812039 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 00:54:40.814106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 00:54:40.855078 ignition[959]: INFO : Ignition 2.19.0
Jan 28 00:54:40.855078 ignition[959]: INFO : Stage: files
Jan 28 00:54:40.860853 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 00:54:40.860853 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:54:40.870312 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 00:54:40.874860 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 00:54:40.874860 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 00:54:40.891211 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 00:54:40.897031 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 00:54:40.902809 unknown[959]: wrote ssh authorized keys file for user: core
Jan 28 00:54:40.906586 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 00:54:40.914171 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 00:54:40.921670 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 28 00:54:40.993221 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 00:54:41.292401 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 00:54:41.292401 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 28 00:54:41.311295 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 28 00:54:41.617551 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 00:54:43.383228 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 28 00:54:43.383228 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 28 00:54:43.399646 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 28 00:54:43.474811 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 00:54:43.484468 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 00:54:43.490702 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 28 00:54:43.490702 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 00:54:43.490702 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 00:54:43.508243 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 00:54:43.508243 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 00:54:43.508243 ignition[959]: INFO : files: files passed
Jan 28 00:54:43.508243 ignition[959]: INFO : Ignition finished successfully
Jan 28 00:54:43.509805 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 00:54:43.535273 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 00:54:43.544750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 00:54:43.553215 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 00:54:43.553370 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 00:54:43.576140 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 00:54:43.581623 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:54:43.581623 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:54:43.605587 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:54:43.632362 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 00:54:43.648580 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 00:54:43.664122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 00:54:43.728671 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 00:54:43.729040 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 00:54:43.739200 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 00:54:43.749658 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 00:54:43.753598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 00:54:43.772051 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 00:54:43.789723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 00:54:43.819870 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 00:54:43.876061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:54:43.884153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 00:54:43.891704 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 00:54:43.893495 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 00:54:43.893811 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 00:54:43.896244 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:54:43.896690 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:54:43.897983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:54:43.898579 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:54:43.907361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:54:43.908012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:54:43.908763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:54:43.912715 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:54:43.913075 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:54:44.066507 ignition[1014]: INFO : Ignition 2.19.0 Jan 28 00:54:44.066507 ignition[1014]: INFO : Stage: umount Jan 28 00:54:44.066507 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:54:44.066507 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:54:43.913605 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:54:44.108487 ignition[1014]: INFO : umount: umount passed Jan 28 00:54:44.108487 ignition[1014]: INFO : Ignition finished successfully Jan 28 00:54:43.914885 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:54:43.915143 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:54:43.916872 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:54:43.918736 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:54:43.920006 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:54:43.920471 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:54:43.920664 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:54:43.920835 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:54:43.922545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:54:43.922852 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:54:43.923853 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:54:43.925051 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:54:43.925494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:54:43.926887 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:54:43.927552 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:54:43.928699 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:54:43.928846 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:54:43.930034 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:54:43.930137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:54:43.930608 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:54:43.930810 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:54:43.931879 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:54:43.932110 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 28 00:54:44.016488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:54:44.022815 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:54:44.023073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:54:44.051275 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:54:44.062845 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:54:44.063164 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:54:44.071574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:54:44.071803 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:54:44.098725 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:54:44.099030 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:54:44.105724 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:54:44.107106 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:54:44.107264 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:54:44.113407 systemd[1]: Stopped target network.target - Network. Jan 28 00:54:44.115960 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:54:44.116033 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:54:44.116665 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:54:44.116744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:54:44.117979 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:54:44.118037 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:54:44.118643 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:54:44.118716 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:54:44.120259 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:54:44.120751 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:54:44.149191 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:54:44.149523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:54:44.153887 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 28 00:54:44.160106 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:54:44.160385 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:54:44.172304 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:54:44.172393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:54:44.195085 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:54:44.204333 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:54:44.204472 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:54:44.209972 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:54:44.210039 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:54:44.214561 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:54:44.214615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:54:44.219378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 28 00:54:44.219482 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:54:44.227581 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:54:44.233802 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:54:44.234144 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:54:44.329779 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:54:44.330106 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:54:44.345161 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:54:44.345609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:54:44.357178 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:54:44.357496 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:54:44.369595 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:54:44.369685 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:54:44.375214 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:54:44.375275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:54:44.377538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:54:44.377599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:54:44.397495 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:54:44.397566 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:54:44.405721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:54:44.405783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:54:44.480123 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:54:44.492272 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:54:44.508182 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:54:44.655085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:54:44.655283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:54:44.678147 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:54:44.696520 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:54:44.709384 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:54:44.731134 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:54:44.746858 systemd[1]: Switching root. Jan 28 00:54:44.781734 systemd-journald[194]: Journal stopped Jan 28 00:54:47.780606 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 28 00:54:47.780761 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:54:47.780782 kernel: SELinux: policy capability open_perms=1 Jan 28 00:54:47.780799 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:54:47.780810 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:54:47.780822 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:54:47.780833 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:54:47.780844 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:54:47.780855 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:54:47.780866 kernel: audit: type=1403 audit(1769561684.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:54:47.780887 systemd[1]: Successfully loaded SELinux policy in 65.666ms. Jan 28 00:54:47.780986 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.121ms. Jan 28 00:54:47.781001 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:54:47.781013 systemd[1]: Detected virtualization kvm. Jan 28 00:54:47.781029 systemd[1]: Detected architecture x86-64. Jan 28 00:54:47.781041 systemd[1]: Detected first boot. Jan 28 00:54:47.781053 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:54:47.781065 zram_generator::config[1058]: No configuration found. Jan 28 00:54:47.781081 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:54:47.781093 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:54:47.781105 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:54:47.781119 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:54:47.781132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:54:47.781144 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:54:47.781156 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:54:47.781168 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:54:47.781180 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:54:47.781195 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:54:47.781212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:54:47.781223 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:54:47.781235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:54:47.781247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:54:47.781259 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:54:47.781271 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:54:47.781282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 28 00:54:47.781297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:54:47.781309 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 00:54:47.781321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:54:47.781332 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:54:47.781344 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:54:47.781355 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:54:47.781367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:54:47.781380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:54:47.781395 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:54:47.781406 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:54:47.781418 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:54:47.781483 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:54:47.781495 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:54:47.781506 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:54:47.781518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:54:47.781530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:54:47.781541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:54:47.781557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:54:47.781569 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:54:47.781581 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:54:47.781593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:47.781604 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:54:47.781616 kernel: hrtimer: interrupt took 22745684 ns Jan 28 00:54:47.781628 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:54:47.781640 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:54:47.781652 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:54:47.781667 systemd[1]: Reached target machines.target - Containers. Jan 28 00:54:47.781679 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:54:47.781692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:54:47.781705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:54:47.781716 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:54:47.781728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:54:47.781740 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:54:47.781751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:54:47.781766 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 28 00:54:47.781777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:54:47.781790 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:54:47.781802 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:54:47.781814 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:54:47.781825 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:54:47.781837 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:54:47.781848 kernel: fuse: init (API version 7.39) Jan 28 00:54:47.781859 kernel: loop: module loaded Jan 28 00:54:47.781873 kernel: ACPI: bus type drm_connector registered Jan 28 00:54:47.781885 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:54:47.781972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:54:47.781986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:54:47.782023 systemd-journald[1142]: Collecting audit messages is disabled. Jan 28 00:54:47.782045 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:54:47.782059 systemd-journald[1142]: Journal started Jan 28 00:54:47.782084 systemd-journald[1142]: Runtime Journal (/run/log/journal/c9c04a4eff434172afe6b6471e06efff) is 6.0M, max 48.4M, 42.3M free. Jan 28 00:54:45.903282 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:54:45.934206 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 00:54:45.935036 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:54:45.935576 systemd[1]: systemd-journald.service: Consumed 2.152s CPU time. Jan 28 00:54:47.898611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:54:47.910610 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:54:47.949173 systemd[1]: Stopped verity-setup.service. Jan 28 00:54:47.965168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:48.008009 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:54:48.078178 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:54:48.088355 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:54:48.095524 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:54:48.102285 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:54:48.110489 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:54:48.118797 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:54:48.125274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:54:48.166726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:54:48.178226 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:54:48.178715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:54:48.209279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:54:48.209760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 28 00:54:48.233812 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:54:48.234361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:54:48.252105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:54:48.252821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:54:48.298502 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:54:48.299164 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:54:48.306321 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:54:48.306722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:54:48.314865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:54:48.321576 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:54:48.327408 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:54:48.434272 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:54:48.541120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:54:48.566418 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:54:48.573373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:54:48.573545 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:54:48.582110 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 00:54:48.598493 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:54:48.608355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:54:48.619337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:54:48.636353 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:54:48.693499 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:54:48.712149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:54:48.718347 systemd-journald[1142]: Time spent on flushing to /var/log/journal/c9c04a4eff434172afe6b6471e06efff is 58.715ms for 936 entries. Jan 28 00:54:48.718347 systemd-journald[1142]: System Journal (/var/log/journal/c9c04a4eff434172afe6b6471e06efff) is 8.0M, max 195.6M, 187.6M free. Jan 28 00:54:48.818757 systemd-journald[1142]: Received client request to flush runtime journal. Jan 28 00:54:48.716561 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:54:48.726360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:54:48.730248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:54:48.776233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:54:48.793301 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:54:48.841795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 28 00:54:48.866743 kernel: loop0: detected capacity change from 0 to 140768 Jan 28 00:54:48.867024 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:54:48.875417 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:54:48.884712 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:54:48.894806 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:54:48.907297 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:54:48.916655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:54:48.943547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:54:48.975991 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:54:48.974362 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 00:54:48.998678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 00:54:49.011557 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:54:49.031193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:54:49.074113 kernel: loop1: detected capacity change from 0 to 219144 Jan 28 00:54:49.093215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:54:49.094274 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 00:54:49.185158 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 00:54:49.216498 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 28 00:54:49.216535 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 28 00:54:49.230120 kernel: loop2: detected capacity change from 0 to 142488 Jan 28 00:54:49.258535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:54:49.334194 kernel: loop3: detected capacity change from 0 to 140768 Jan 28 00:54:49.417194 kernel: loop4: detected capacity change from 0 to 219144 Jan 28 00:54:49.478163 kernel: loop5: detected capacity change from 0 to 142488 Jan 28 00:54:49.506684 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 00:54:49.509827 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 28 00:54:49.524797 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:54:49.524879 systemd[1]: Reloading... Jan 28 00:54:49.693284 zram_generator::config[1220]: No configuration found. Jan 28 00:54:50.020524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:54:50.145662 systemd[1]: Reloading finished in 619 ms. Jan 28 00:54:50.204266 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:54:50.221365 systemd[1]: Starting ensure-sysext.service... Jan 28 00:54:50.239623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 28 00:54:50.258532 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:54:50.280412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:54:50.288053 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:54:50.288209 systemd[1]: Reloading... Jan 28 00:54:50.421331 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:54:50.422225 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:54:50.428201 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:54:50.428828 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 28 00:54:50.429152 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 28 00:54:50.449883 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:54:50.450186 systemd-tmpfiles[1259]: Skipping /boot Jan 28 00:54:50.452119 zram_generator::config[1287]: No configuration found. Jan 28 00:54:50.482251 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:54:50.482501 systemd-tmpfiles[1259]: Skipping /boot Jan 28 00:54:50.689611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:54:50.862830 systemd[1]: Reloading finished in 574 ms. Jan 28 00:54:50.889976 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:54:50.897004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:54:50.924539 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:54:50.931024 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:54:50.936091 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:54:50.946576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:54:50.959524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:54:50.966183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:54:50.985529 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:54:50.994244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:50.994494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:54:50.998312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:54:51.023060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:54:51.033042 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 28 00:54:51.046994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:54:51.054405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 28 00:54:51.055350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:51.057374 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:54:51.066144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:54:51.066577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:54:51.077616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:54:51.078053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:54:51.092801 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:54:51.106685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:54:51.111694 augenrules[1356]: No rules Jan 28 00:54:51.117350 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:54:51.126216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:54:51.135260 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:54:51.155331 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:54:51.155772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:54:51.163986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:54:51.174357 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:54:51.186199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:54:51.221968 systemd[1]: Finished ensure-sysext.service. Jan 28 00:54:51.230396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:51.230657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:54:51.241352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:54:51.251240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:54:51.258180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:54:51.264180 systemd-resolved[1331]: Positive Trust Anchors: Jan 28 00:54:51.264251 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:54:51.264296 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:54:51.272422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:54:51.277544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 28 00:54:51.278393 systemd-resolved[1331]: Defaulting to hostname 'linux'. Jan 28 00:54:51.283182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:54:51.299259 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:54:51.305581 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:54:51.305664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:54:51.306357 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:54:51.519769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:54:51.520102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:54:51.534251 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:54:51.534575 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:54:51.545824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:54:51.547199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:54:51.557603 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:54:51.558127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:54:51.584634 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 00:54:51.589984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382) Jan 28 00:54:51.599109 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:54:51.605016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:54:51.605148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:54:51.624081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:54:51.638185 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:54:51.934867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:54:51.942191 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:54:51.958999 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:54:51.971018 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 00:54:51.978022 kernel: ACPI: button: Power Button [PWRF] Jan 28 00:54:51.990037 systemd-networkd[1396]: lo: Link UP Jan 28 00:54:51.990053 systemd-networkd[1396]: lo: Gained carrier Jan 28 00:54:51.996615 systemd-networkd[1396]: Enumeration completed Jan 28 00:54:51.996797 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:54:52.005866 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:54:52.006838 systemd[1]: Reached target network.target - Network. 
Jan 28 00:54:52.012833 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:54:52.041408 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 00:54:52.042167 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 00:54:52.042638 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 00:54:52.022076 systemd-networkd[1396]: eth0: Link UP Jan 28 00:54:52.022089 systemd-networkd[1396]: eth0: Gained carrier Jan 28 00:54:52.022122 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:54:52.039666 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:54:52.070841 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:54:52.072116 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Jan 28 00:54:52.074220 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 00:54:52.074838 systemd-timesyncd[1397]: Initial clock synchronization to Wed 2026-01-28 00:54:52.357955 UTC. Jan 28 00:54:52.088106 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 00:54:52.348267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:54:52.463005 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:54:52.500271 kernel: kvm_amd: TSC scaling supported Jan 28 00:54:52.500358 kernel: kvm_amd: Nested Virtualization enabled Jan 28 00:54:52.500374 kernel: kvm_amd: Nested Paging enabled Jan 28 00:54:52.506045 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 00:54:52.506088 kernel: kvm_amd: PMU virtualization is disabled Jan 28 00:54:52.692722 kernel: EDAC MC: Ver: 3.0.0 Jan 28 00:54:52.801289 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 00:54:53.021190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:54:53.054869 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 00:54:53.413920 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:54:53.465764 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 00:54:53.474889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:54:53.482529 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:54:53.488622 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:54:53.496577 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:54:53.504859 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:54:53.511670 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:54:53.519328 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:54:53.527477 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:54:53.527609 systemd[1]: Reached target paths.target - Path Units. 
Jan 28 00:54:53.532660 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:54:53.537721 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:54:53.544468 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:54:53.756621 systemd-networkd[1396]: eth0: Gained IPv6LL Jan 28 00:54:53.765280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:54:54.211070 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 00:54:54.268685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:54:54.334079 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:54:54.380494 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:54:54.390219 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:54:54.408828 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:54:54.413015 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:54:54.413097 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:54:54.417157 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:54:54.424438 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 00:54:54.432315 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:54:54.442574 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:54:54.454583 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:54:54.461158 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:54:54.463560 jq[1435]: false Jan 28 00:54:54.464500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:54:54.473646 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:54:54.474757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:54:54.486453 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:54:54.519257 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:54:54.532361 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 28 00:54:54.541183 extend-filesystems[1436]: Found loop3 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found loop4 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found loop5 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found sr0 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda1 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda2 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda3 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found usr Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda4 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda6 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda7 Jan 28 00:54:54.546868 extend-filesystems[1436]: Found vda9 Jan 28 00:54:54.546868 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 28 00:54:54.794228 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 28 00:54:54.797418 dbus-daemon[1434]: [system] SELinux support is enabled Jan 28 00:54:55.161113 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 00:54:55.161182 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 00:54:55.161211 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382) Jan 28 00:54:55.159455 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:54:55.161585 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Jan 28 00:54:55.161585 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 00:54:55.161585 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 00:54:55.161585 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 00:54:55.201558 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 28 00:54:55.176383 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:54:55.195346 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:54:55.196442 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:54:55.250602 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:54:55.258311 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:54:55.268793 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:54:55.277557 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 00:54:55.286493 jq[1466]: true Jan 28 00:54:55.289078 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:54:55.290100 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:54:55.290575 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:54:55.291093 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:54:55.304670 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:54:55.305262 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 28 00:54:55.309735 update_engine[1465]: I20260128 00:54:55.308558 1465 main.cc:92] Flatcar Update Engine starting Jan 28 00:54:55.330425 update_engine[1465]: I20260128 00:54:55.312441 1465 update_check_scheduler.cc:74] Next update check in 11m6s Jan 28 00:54:55.314820 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:54:55.334335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:54:55.334638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:54:55.386629 jq[1471]: true Jan 28 00:54:55.416287 tar[1470]: linux-amd64/LICENSE Jan 28 00:54:55.416287 tar[1470]: linux-amd64/helm Jan 28 00:54:55.403360 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:54:55.409655 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 00:54:55.410100 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 00:54:55.417631 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 00:54:55.417658 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:54:55.437384 systemd-logind[1461]: New seat seat0. Jan 28 00:54:55.443595 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:54:55.475191 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:54:55.486635 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:54:55.486866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:54:55.487145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:54:55.492455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:54:55.492676 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:54:55.507442 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:54:55.563135 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:54:55.614293 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:54:55.618176 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:54:55.645687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:54:56.418637 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:54:56.424222 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 00:54:56.444623 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:54:56.468209 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:54:56.468600 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:54:56.490175 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:54:56.869075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:54:57.305150 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 28 00:54:57.313444 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 00:54:57.319698 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:54:59.740168 containerd[1472]: time="2026-01-28T00:54:59.739357031Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 00:54:59.790778 containerd[1472]: time="2026-01-28T00:54:59.790724315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.796514052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.796569749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.796589180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.796895206Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797037399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797287536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797307802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797629168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797646155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797695307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:54:59.797995 containerd[1472]: time="2026-01-28T00:54:59.797706718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.798822 containerd[1472]: time="2026-01-28T00:54:59.798792907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:54:59.799652 containerd[1472]: time="2026-01-28T00:54:59.799628531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 28 00:54:59.800032 containerd[1472]: time="2026-01-28T00:54:59.799889977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:54:59.800102 containerd[1472]: time="2026-01-28T00:54:59.800085383Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 00:54:59.800376 containerd[1472]: time="2026-01-28T00:54:59.800338981Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 00:54:59.800538 containerd[1472]: time="2026-01-28T00:54:59.800517868Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:54:59.847257 containerd[1472]: time="2026-01-28T00:54:59.847206254Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 00:54:59.848056 containerd[1472]: time="2026-01-28T00:54:59.848035598Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 00:54:59.848317 containerd[1472]: time="2026-01-28T00:54:59.848226638Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 00:54:59.848616 containerd[1472]: time="2026-01-28T00:54:59.848595660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 00:54:59.849006 containerd[1472]: time="2026-01-28T00:54:59.848989131Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 00:54:59.850151 containerd[1472]: time="2026-01-28T00:54:59.850040559Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 00:54:59.855024 containerd[1472]: time="2026-01-28T00:54:59.854994976Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 00:54:59.855775 containerd[1472]: time="2026-01-28T00:54:59.855751587Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 00:54:59.855993 containerd[1472]: time="2026-01-28T00:54:59.855875254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 00:54:59.856203 containerd[1472]: time="2026-01-28T00:54:59.856086907Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 00:54:59.856312 containerd[1472]: time="2026-01-28T00:54:59.856296004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.856798 containerd[1472]: time="2026-01-28T00:54:59.856778088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857192547Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857219449Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857331432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857346141Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857360420Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857381490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857526392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857545751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857561273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857620888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857637276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857651333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857719570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.857987 containerd[1472]: time="2026-01-28T00:54:59.857734531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857749209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857766247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857781231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857795715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857845946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.858289 containerd[1472]: time="2026-01-28T00:54:59.857866914Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 00:54:59.858835 containerd[1472]: time="2026-01-28T00:54:59.858812822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 28 00:54:59.859123 containerd[1472]: time="2026-01-28T00:54:59.859033634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.859323 containerd[1472]: time="2026-01-28T00:54:59.859229051Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 00:54:59.860127 containerd[1472]: time="2026-01-28T00:54:59.860026263Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 00:54:59.861181 containerd[1472]: time="2026-01-28T00:54:59.861067849Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 00:54:59.861375 containerd[1472]: time="2026-01-28T00:54:59.861271094Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 00:54:59.863001 containerd[1472]: time="2026-01-28T00:54:59.861295563Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 00:54:59.863001 containerd[1472]: time="2026-01-28T00:54:59.861620145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 00:54:59.863001 containerd[1472]: time="2026-01-28T00:54:59.861642935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 00:54:59.863001 containerd[1472]: time="2026-01-28T00:54:59.861769219Z" level=info msg="NRI interface is disabled by configuration." Jan 28 00:54:59.863001 containerd[1472]: time="2026-01-28T00:54:59.861785525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 00:54:59.863463 containerd[1472]: time="2026-01-28T00:54:59.863398931Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 00:54:59.864470 containerd[1472]: time="2026-01-28T00:54:59.864451337Z" level=info msg="Connect containerd service" Jan 28 00:54:59.865000 containerd[1472]: time="2026-01-28T00:54:59.864981382Z" level=info msg="using legacy CRI server" Jan 28 00:54:59.865203 containerd[1472]: time="2026-01-28T00:54:59.865184025Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:54:59.873640 tar[1470]: linux-amd64/README.md Jan 28 00:54:59.882491 containerd[1472]: time="2026-01-28T00:54:59.880564795Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 00:54:59.887968 containerd[1472]: time="2026-01-28T00:54:59.887775461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not 
initialized: failed to load cni config" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.889326967Z" level=info msg="Start subscribing containerd event" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.889836900Z" level=info msg="Start recovering state" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.890287085Z" level=info msg="Start event monitor" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.890479937Z" level=info msg="Start snapshots syncer" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.890593530Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:54:59.890978 containerd[1472]: time="2026-01-28T00:54:59.890636544Z" level=info msg="Start streaming server" Jan 28 00:54:59.892816 containerd[1472]: time="2026-01-28T00:54:59.892630332Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:54:59.893955 containerd[1472]: time="2026-01-28T00:54:59.893755291Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:54:59.909842 containerd[1472]: time="2026-01-28T00:54:59.905539953Z" level=info msg="containerd successfully booted in 0.170105s" Jan 28 00:54:59.913254 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:54:59.985652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:55:02.103614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:02.110739 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:55:02.114543 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:55:02.118259 systemd[1]: Startup finished in 3.893s (kernel) + 12.902s (initrd) + 17.197s (userspace) = 33.993s. Jan 28 00:55:03.899455 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:55:03.911863 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392). Jan 28 00:55:05.108639 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:05.186788 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:05.609557 systemd-logind[1461]: New session 1 of user core. Jan 28 00:55:05.622072 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:55:05.668759 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:55:05.710719 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:55:05.720584 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:55:05.750846 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:55:06.075105 kubelet[1544]: E0128 00:55:06.073506 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:55:06.076880 systemd[1560]: Queued start job for default target default.target. 
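containerd's startup block above walks through every plugin it knows about and, for each one it skips (aufs, blockfile, btrfs, devmapper, zfs, the two tracing plugins), logs the exact reason; the same startup also warns that /etc/cni/net.d has no network config, so pod networking stays uninitialized until something installs a CNI config there. A minimal Python sketch that recovers the skipped-plugin table from journal text in this format (stdin, e.g. journalctl -u containerd); the escaped quotes in the pattern match how these messages appear once logged:

    #!/usr/bin/env python3
    # Sketch: list containerd plugins that were skipped at startup and why,
    # based purely on the "skip loading plugin ... error=... type=..." lines above.
    import re
    import sys

    pattern = re.compile(r'skip loading plugin \\"([^\\]+)\\".*?error="(.+?)" type=')

    for line in sys.stdin:
        for name, reason in pattern.findall(line):
            print(f"{name:45s} {reason}")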
Jan 28 00:55:06.085553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:55:06.085777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:55:06.086595 systemd[1]: kubelet.service: Consumed 9.926s CPU time. Jan 28 00:55:06.088467 systemd[1560]: Created slice app.slice - User Application Slice. Jan 28 00:55:06.088538 systemd[1560]: Reached target paths.target - Paths. Jan 28 00:55:06.088555 systemd[1560]: Reached target timers.target - Timers. Jan 28 00:55:06.091684 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:55:06.115868 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:55:06.116180 systemd[1560]: Reached target sockets.target - Sockets. Jan 28 00:55:06.116197 systemd[1560]: Reached target basic.target - Basic System. Jan 28 00:55:06.116244 systemd[1560]: Reached target default.target - Main User Target. Jan 28 00:55:06.116291 systemd[1560]: Startup finished in 347ms. Jan 28 00:55:06.116610 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:55:06.127252 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:55:06.204666 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:58402.service - OpenSSH per-connection server daemon (10.0.0.1:58402). Jan 28 00:55:06.276651 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 58402 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:06.279875 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:06.288776 systemd-logind[1461]: New session 2 of user core. Jan 28 00:55:06.296196 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:55:06.360973 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 28 00:55:06.380356 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:58402.service: Deactivated successfully. Jan 28 00:55:06.382838 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 00:55:06.385443 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Jan 28 00:55:06.400490 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:58414.service - OpenSSH per-connection server daemon (10.0.0.1:58414). Jan 28 00:55:06.402518 systemd-logind[1461]: Removed session 2. Jan 28 00:55:06.438541 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 58414 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:06.440539 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:06.447752 systemd-logind[1461]: New session 3 of user core. Jan 28 00:55:06.457221 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:55:06.515878 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 28 00:55:06.530856 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:58414.service: Deactivated successfully. Jan 28 00:55:06.533700 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:55:06.536251 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:55:06.548576 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). Jan 28 00:55:06.550081 systemd-logind[1461]: Removed session 3. 
Jan 28 00:55:06.591764 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:06.594379 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:06.601787 systemd-logind[1461]: New session 4 of user core. Jan 28 00:55:06.611217 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:55:07.204790 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 28 00:55:07.214750 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:58416.service: Deactivated successfully. Jan 28 00:55:07.217635 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:55:07.220624 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:55:07.235451 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:58424.service - OpenSSH per-connection server daemon (10.0.0.1:58424). Jan 28 00:55:07.236735 systemd-logind[1461]: Removed session 4. Jan 28 00:55:07.273877 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 58424 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:07.278232 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:07.301968 systemd-logind[1461]: New session 5 of user core. Jan 28 00:55:07.317112 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:55:07.439423 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:55:07.440092 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:55:07.465259 sudo[1596]: pam_unix(sudo:session): session closed for user root Jan 28 00:55:07.468257 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 28 00:55:07.478195 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:58424.service: Deactivated successfully. Jan 28 00:55:07.480857 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:55:07.483549 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:55:07.499584 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Jan 28 00:55:07.503137 systemd-logind[1461]: Removed session 5. Jan 28 00:55:07.544635 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:07.546833 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:07.554514 systemd-logind[1461]: New session 6 of user core. Jan 28 00:55:07.564212 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:55:07.631682 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:55:07.632412 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:55:07.640290 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 28 00:55:07.652051 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 00:55:07.652470 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:55:07.683119 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 00:55:07.696327 auditctl[1608]: No rules Jan 28 00:55:07.697207 systemd[1]: audit-rules.service: Deactivated successfully. 
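The sudo session above removed 80-selinux.rules and 99-default.rules from /etc/audit/rules.d/ and then restarted audit-rules; the "No rules" messages from auditctl here (and from augenrules just below) simply mean that nothing loadable remains in that directory when the ruleset is rebuilt. A minimal Python sketch, using the directory path from the sudo command, that shows what the rebuild now has to work with:

    #!/usr/bin/env python3
    # Sketch: list whatever non-comment audit rules are still present under
    # /etc/audit/rules.d/ after the two rule files were removed above.
    import glob

    rules = []
    for path in glob.glob("/etc/audit/rules.d/*.rules"):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    rules.append((path, line))

    print(f"{len(rules)} active audit rule line(s) remain")
    for path, rule in rules:
        print(f"  {path}: {rule}")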
Jan 28 00:55:07.698055 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 00:55:07.723639 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:55:07.786400 augenrules[1626]: No rules Jan 28 00:55:07.787788 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:55:07.789704 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 28 00:55:07.793126 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 28 00:55:07.806406 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:58430.service: Deactivated successfully. Jan 28 00:55:07.809462 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:55:07.812545 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:55:07.829502 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446). Jan 28 00:55:07.831361 systemd-logind[1461]: Removed session 6. Jan 28 00:55:07.872520 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:55:07.875594 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:55:07.883093 systemd-logind[1461]: New session 7 of user core. Jan 28 00:55:07.898205 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:55:07.963516 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:55:07.964146 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:55:10.937589 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:55:10.940439 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:55:12.420167 dockerd[1655]: time="2026-01-28T00:55:12.419711404Z" level=info msg="Starting up" Jan 28 00:55:13.081538 dockerd[1655]: time="2026-01-28T00:55:13.080984583Z" level=info msg="Loading containers: start." Jan 28 00:55:13.314087 kernel: Initializing XFRM netlink socket Jan 28 00:55:13.623535 systemd-networkd[1396]: docker0: Link UP Jan 28 00:55:13.649984 dockerd[1655]: time="2026-01-28T00:55:13.649711905Z" level=info msg="Loading containers: done." Jan 28 00:55:13.694801 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3265117123-merged.mount: Deactivated successfully. Jan 28 00:55:13.697089 dockerd[1655]: time="2026-01-28T00:55:13.697002808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:55:13.697287 dockerd[1655]: time="2026-01-28T00:55:13.697186411Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 00:55:13.697596 dockerd[1655]: time="2026-01-28T00:55:13.697351855Z" level=info msg="Daemon has completed initialization" Jan 28 00:55:13.784300 dockerd[1655]: time="2026-01-28T00:55:13.784191614Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:55:13.787029 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:55:16.255418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
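kubelet.service has now entered the restart loop that repeats through the rest of this log: each start exits with status 1 because /var/lib/kubelet/config.yaml does not exist, and systemd schedules the next attempt. On a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, so the loop is expected until one of those runs (the kubeadm hint is an assumption about how this node gets bootstrapped). A minimal Python sketch that performs the same check the kubelet is failing on, with the path taken from the error message:

    #!/usr/bin/env python3
    # Sketch: report whether the kubelet config file whose absence causes the
    # restart loop above exists yet.
    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    if os.path.isfile(CONFIG):
        print(f"{CONFIG} exists ({os.path.getsize(CONFIG)} bytes); kubelet can load it")
        sys.exit(0)

    print(f"{CONFIG} is missing; kubelet will keep exiting with status 1")
    print("it is typically generated by 'kubeadm init' or 'kubeadm join'")
    sys.exit(1)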
Jan 28 00:55:16.736494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:17.538359 containerd[1472]: time="2026-01-28T00:55:17.537654958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 00:55:18.094587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:18.102389 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:55:18.526278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53666927.mount: Deactivated successfully. Jan 28 00:55:18.591996 kubelet[1812]: E0128 00:55:18.591786 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:55:18.599552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:55:18.599873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:55:18.600647 systemd[1]: kubelet.service: Consumed 1.776s CPU time. Jan 28 00:55:21.214628 containerd[1472]: time="2026-01-28T00:55:21.214274998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:21.214628 containerd[1472]: time="2026-01-28T00:55:21.214527994Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 28 00:55:21.217582 containerd[1472]: time="2026-01-28T00:55:21.217528536Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:21.221974 containerd[1472]: time="2026-01-28T00:55:21.221811991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:21.224540 containerd[1472]: time="2026-01-28T00:55:21.224444460Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.685993775s" Jan 28 00:55:21.224592 containerd[1472]: time="2026-01-28T00:55:21.224553075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 28 00:55:21.228976 containerd[1472]: time="2026-01-28T00:55:21.228823347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 00:55:23.830645 containerd[1472]: time="2026-01-28T00:55:23.830248530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:23.830645 containerd[1472]: time="2026-01-28T00:55:23.830985745Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 28 
00:55:23.832776 containerd[1472]: time="2026-01-28T00:55:23.832719750Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:23.837759 containerd[1472]: time="2026-01-28T00:55:23.837709742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:23.840498 containerd[1472]: time="2026-01-28T00:55:23.840426011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.611395074s" Jan 28 00:55:23.840579 containerd[1472]: time="2026-01-28T00:55:23.840513891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 28 00:55:23.844499 containerd[1472]: time="2026-01-28T00:55:23.844464119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 00:55:25.657046 containerd[1472]: time="2026-01-28T00:55:25.656668368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:25.657046 containerd[1472]: time="2026-01-28T00:55:25.657276794Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 28 00:55:25.659133 containerd[1472]: time="2026-01-28T00:55:25.659083168Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:25.663045 containerd[1472]: time="2026-01-28T00:55:25.662993906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:25.664654 containerd[1472]: time="2026-01-28T00:55:25.664599431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.820023912s" Jan 28 00:55:25.664654 containerd[1472]: time="2026-01-28T00:55:25.664646118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 28 00:55:25.668002 containerd[1472]: time="2026-01-28T00:55:25.667950988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 00:55:27.806937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204966359.mount: Deactivated successfully. 
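For each image pulled so far (kube-apiserver, then kube-controller-manager) containerd logs a "stop pulling image ...: bytes read=N" line and a matching "Pulled image ... in D" line. A minimal Python sketch that joins the two into a rough pull-throughput table from journal text on stdin; only the plain "...s"/"...ms" durations seen in this log are handled:

    #!/usr/bin/env python3
    # Sketch: rough per-image pull throughput from containerd journal text,
    # e.g. journalctl -u containerd | python3 pull_rates.py
    import re
    import sys

    text = sys.stdin.read()

    sizes = dict(re.findall(
        r'stop pulling image (\S+): active requests=\d+, bytes read=(\d+)', text))
    pulls = re.findall(r'Pulled image \\"([^\\]+)\\".*? in ([0-9.]+m?s)"', text)

    def seconds(dur):
        return float(dur[:-2]) / 1000 if dur.endswith("ms") else float(dur[:-1])

    for image, dur in pulls:
        if image in sizes:
            mb = int(sizes[image]) / 1e6
            print(f"{image:55s} {mb:7.1f} MB in {dur:>14s} ({mb / seconds(dur):5.1f} MB/s)")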
Jan 28 00:55:28.685144 containerd[1472]: time="2026-01-28T00:55:28.684703619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:28.685144 containerd[1472]: time="2026-01-28T00:55:28.685416155Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 28 00:55:28.687477 containerd[1472]: time="2026-01-28T00:55:28.687416400Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:28.691495 containerd[1472]: time="2026-01-28T00:55:28.691453088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:28.693577 containerd[1472]: time="2026-01-28T00:55:28.693436574Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.025309581s" Jan 28 00:55:28.693577 containerd[1472]: time="2026-01-28T00:55:28.693534821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 28 00:55:28.698512 containerd[1472]: time="2026-01-28T00:55:28.698446249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 00:55:28.873641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:55:28.902426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:29.154950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:29.160444 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:55:29.352188 kubelet[1903]: E0128 00:55:29.351691 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:55:29.356159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:55:29.356397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:55:29.497063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902547908.mount: Deactivated successfully. 
Jan 28 00:55:32.123333 containerd[1472]: time="2026-01-28T00:55:32.122580160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.123333 containerd[1472]: time="2026-01-28T00:55:32.123278986Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 28 00:55:32.127497 containerd[1472]: time="2026-01-28T00:55:32.127132632Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.162842 containerd[1472]: time="2026-01-28T00:55:32.161858771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.172295 containerd[1472]: time="2026-01-28T00:55:32.172132294Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.473528368s" Jan 28 00:55:32.172392 containerd[1472]: time="2026-01-28T00:55:32.172316469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 28 00:55:32.177674 containerd[1472]: time="2026-01-28T00:55:32.177449368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 00:55:32.654203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335363898.mount: Deactivated successfully. 
Jan 28 00:55:32.662399 containerd[1472]: time="2026-01-28T00:55:32.662210895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.663560 containerd[1472]: time="2026-01-28T00:55:32.663401155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 28 00:55:32.664665 containerd[1472]: time="2026-01-28T00:55:32.664571766Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.667788 containerd[1472]: time="2026-01-28T00:55:32.667651852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:32.669261 containerd[1472]: time="2026-01-28T00:55:32.669049305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 491.467371ms" Jan 28 00:55:32.669325 containerd[1472]: time="2026-01-28T00:55:32.669222044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 28 00:55:32.672966 containerd[1472]: time="2026-01-28T00:55:32.672820306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 00:55:33.194401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2550653873.mount: Deactivated successfully. Jan 28 00:55:37.802712 containerd[1472]: time="2026-01-28T00:55:37.802268300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:37.802712 containerd[1472]: time="2026-01-28T00:55:37.802881677Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 28 00:55:37.805585 containerd[1472]: time="2026-01-28T00:55:37.804290607Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:37.808391 containerd[1472]: time="2026-01-28T00:55:37.808279453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:55:37.811107 containerd[1472]: time="2026-01-28T00:55:37.810982189Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.138114645s" Jan 28 00:55:37.811232 containerd[1472]: time="2026-01-28T00:55:37.811116183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 28 00:55:39.651522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 28 00:55:39.665425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:40.072412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:40.080830 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:55:40.154000 update_engine[1465]: I20260128 00:55:40.152369 1465 update_attempter.cc:509] Updating boot flags... Jan 28 00:55:40.202232 kubelet[2053]: E0128 00:55:40.201874 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:55:40.206066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:55:40.206347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:55:40.301947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2069) Jan 28 00:55:40.357994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2073) Jan 28 00:55:41.489290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:41.509748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:41.559004 systemd[1]: Reloading requested from client PID 2083 ('systemctl') (unit session-7.scope)... Jan 28 00:55:41.559049 systemd[1]: Reloading... Jan 28 00:55:41.817451 zram_generator::config[2125]: No configuration found. Jan 28 00:55:41.972397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:55:42.068550 systemd[1]: Reloading finished in 508 ms. Jan 28 00:55:42.149277 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:55:42.149453 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:55:42.149806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:42.160264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:42.360016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:42.367844 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:55:42.463069 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:55:42.463069 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
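The kubelet started after this reload gets past flag parsing and warns that --pod-infra-container-image and --volume-plugin-dir are deprecated; the second is meant to move into the KubeletConfiguration file passed via --config. A minimal sketch of the equivalent stanza, illustrative only: the volumePluginDir value is the Flexvolume path the kubelet recreates a few lines below, and on a kubeadm-managed node /var/lib/kubelet/config.yaml is generated by kubeadm rather than written by hand.

    #!/usr/bin/env python3
    # Illustrative only: the KubeletConfiguration shape that replaces the
    # deprecated --volume-plugin-dir flag. Not meant to overwrite the
    # kubeadm-generated /var/lib/kubelet/config.yaml on this node.
    FRAGMENT = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """

    print(FRAGMENT)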
Jan 28 00:55:42.463714 kubelet[2170]: I0128 00:55:42.463099 2170 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:55:43.173318 kubelet[2170]: I0128 00:55:43.173011 2170 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:55:43.173318 kubelet[2170]: I0128 00:55:43.173068 2170 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:55:43.173318 kubelet[2170]: I0128 00:55:43.173169 2170 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:55:43.173318 kubelet[2170]: I0128 00:55:43.173184 2170 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:55:43.175419 kubelet[2170]: I0128 00:55:43.174190 2170 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:55:43.183753 kubelet[2170]: I0128 00:55:43.183632 2170 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:55:43.185104 kubelet[2170]: E0128 00:55:43.185043 2170 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 00:55:43.231138 kubelet[2170]: E0128 00:55:43.230410 2170 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:55:43.231138 kubelet[2170]: I0128 00:55:43.230558 2170 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 00:55:43.245306 kubelet[2170]: I0128 00:55:43.245233 2170 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 00:55:43.246200 kubelet[2170]: I0128 00:55:43.246117 2170 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:55:43.246594 kubelet[2170]: I0128 00:55:43.246181 2170 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:55:43.247005 kubelet[2170]: I0128 00:55:43.246693 2170 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:55:43.247005 kubelet[2170]: I0128 00:55:43.246712 2170 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:55:43.247155 kubelet[2170]: I0128 00:55:43.247081 2170 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:55:43.252128 kubelet[2170]: I0128 00:55:43.252045 2170 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:55:43.255400 kubelet[2170]: I0128 00:55:43.255322 2170 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:55:43.255400 kubelet[2170]: I0128 00:55:43.255398 2170 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:55:43.255542 kubelet[2170]: I0128 00:55:43.255517 2170 kubelet.go:387] "Adding apiserver pod source" Jan 28 00:55:43.255720 kubelet[2170]: I0128 00:55:43.255637 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:55:43.256720 kubelet[2170]: E0128 00:55:43.256591 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:55:43.256720 kubelet[2170]: E0128 00:55:43.256590 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:55:43.260883 kubelet[2170]: I0128 00:55:43.260806 2170 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:55:43.262107 kubelet[2170]: I0128 00:55:43.262035 2170 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:55:43.262107 kubelet[2170]: I0128 00:55:43.262098 2170 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 00:55:43.262379 kubelet[2170]: W0128 00:55:43.262309 2170 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:55:43.270059 kubelet[2170]: I0128 00:55:43.269991 2170 server.go:1262] "Started kubelet" Jan 28 00:55:43.270293 kubelet[2170]: I0128 00:55:43.270224 2170 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:55:43.270967 kubelet[2170]: I0128 00:55:43.270238 2170 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:55:43.271013 kubelet[2170]: I0128 00:55:43.270994 2170 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:55:43.272045 kubelet[2170]: I0128 00:55:43.271585 2170 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:55:43.272693 kubelet[2170]: I0128 00:55:43.272608 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:55:43.272991 kubelet[2170]: I0128 00:55:43.272876 2170 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:55:43.275749 kubelet[2170]: I0128 00:55:43.275473 2170 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:55:43.278990 kubelet[2170]: E0128 00:55:43.278817 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:55:43.279052 kubelet[2170]: I0128 00:55:43.279003 2170 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:55:43.279052 kubelet[2170]: E0128 00:55:43.279027 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="200ms" Jan 28 00:55:43.279594 kubelet[2170]: I0128 00:55:43.279158 2170 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:55:43.279594 kubelet[2170]: I0128 00:55:43.279280 2170 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:55:43.279594 kubelet[2170]: E0128 00:55:43.277500 2170 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf07abb8069d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:55:43.269824157 +0000 UTC m=+0.879855006,LastTimestamp:2026-01-28 00:55:43.269824157 +0000 UTC m=+0.879855006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:55:43.279594 kubelet[2170]: E0128 00:55:43.279515 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:55:43.280660 kubelet[2170]: I0128 00:55:43.280590 2170 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:55:43.280794 kubelet[2170]: I0128 00:55:43.280765 2170 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:55:43.280794 kubelet[2170]: E0128 00:55:43.280776 2170 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:55:43.282159 kubelet[2170]: I0128 00:55:43.282089 2170 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:55:43.315580 kubelet[2170]: I0128 00:55:43.315540 2170 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:55:43.315580 kubelet[2170]: I0128 00:55:43.315560 2170 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:55:43.315870 kubelet[2170]: I0128 00:55:43.315598 2170 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:55:43.322533 kubelet[2170]: I0128 00:55:43.322243 2170 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:55:43.322533 kubelet[2170]: I0128 00:55:43.322320 2170 policy_none.go:49] "None policy: Start" Jan 28 00:55:43.322533 kubelet[2170]: I0128 00:55:43.322464 2170 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:55:43.322533 kubelet[2170]: I0128 00:55:43.322536 2170 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:55:43.325524 kubelet[2170]: I0128 00:55:43.325506 2170 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:55:43.325813 kubelet[2170]: I0128 00:55:43.325797 2170 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:55:43.326054 kubelet[2170]: I0128 00:55:43.326003 2170 policy_none.go:47] "Start" Jan 28 00:55:43.326197 kubelet[2170]: I0128 00:55:43.326182 2170 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:55:43.326378 kubelet[2170]: E0128 00:55:43.326357 2170 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:55:43.328829 kubelet[2170]: E0128 00:55:43.328807 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:55:43.335670 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:55:43.348377 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 00:55:43.384795 kubelet[2170]: E0128 00:55:43.384221 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:55:43.391676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:55:43.414181 kubelet[2170]: E0128 00:55:43.413724 2170 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:55:43.415110 kubelet[2170]: I0128 00:55:43.414364 2170 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:55:43.415110 kubelet[2170]: I0128 00:55:43.414511 2170 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:55:43.415110 kubelet[2170]: I0128 00:55:43.415036 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:55:43.418871 kubelet[2170]: E0128 00:55:43.418837 2170 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:55:43.419111 kubelet[2170]: E0128 00:55:43.419077 2170 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 00:55:43.440045 systemd[1]: Created slice kubepods-burstable-pod42053ba272c662d83778a147e2822c59.slice - libcontainer container kubepods-burstable-pod42053ba272c662d83778a147e2822c59.slice. Jan 28 00:55:43.459605 kubelet[2170]: E0128 00:55:43.459473 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:43.464045 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 28 00:55:43.466504 kubelet[2170]: E0128 00:55:43.466465 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:43.468399 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 28 00:55:43.470828 kubelet[2170]: E0128 00:55:43.470784 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:43.480638 kubelet[2170]: E0128 00:55:43.480540 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="400ms" Jan 28 00:55:43.485005 kubelet[2170]: I0128 00:55:43.484972 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:43.485005 kubelet[2170]: I0128 00:55:43.485011 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:43.485130 kubelet[2170]: I0128 00:55:43.485058 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:43.485130 kubelet[2170]: I0128 00:55:43.485074 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:43.485130 kubelet[2170]: I0128 00:55:43.485091 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:43.485130 kubelet[2170]: I0128 00:55:43.485105 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:43.485130 kubelet[2170]: I0128 00:55:43.485119 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:43.485252 kubelet[2170]: I0128 00:55:43.485158 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:43.485278 kubelet[2170]: I0128 00:55:43.485260 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:43.521108 kubelet[2170]: I0128 00:55:43.520838 2170 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:43.524847 kubelet[2170]: E0128 00:55:43.524719 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 28 00:55:43.746149 kubelet[2170]: I0128 00:55:43.744463 2170 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:43.746149 kubelet[2170]: E0128 00:55:43.745646 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 28 00:55:43.765472 kubelet[2170]: E0128 00:55:43.765309 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:43.768483 containerd[1472]: time="2026-01-28T00:55:43.768354327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42053ba272c662d83778a147e2822c59,Namespace:kube-system,Attempt:0,}" Jan 28 00:55:43.770732 kubelet[2170]: E0128 00:55:43.770577 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:43.771550 containerd[1472]: time="2026-01-28T00:55:43.771437556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 28 00:55:43.774012 kubelet[2170]: E0128 00:55:43.773927 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:43.774964 containerd[1472]: time="2026-01-28T00:55:43.774786306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 28 00:55:43.881736 kubelet[2170]: E0128 00:55:43.881624 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="800ms" Jan 28 00:55:44.148779 
kubelet[2170]: I0128 00:55:44.148677 2170 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:44.149575 kubelet[2170]: E0128 00:55:44.149420 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 28 00:55:44.172621 kubelet[2170]: E0128 00:55:44.172540 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:55:44.228771 kubelet[2170]: E0128 00:55:44.228518 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:55:44.250168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633271331.mount: Deactivated successfully. Jan 28 00:55:44.256552 containerd[1472]: time="2026-01-28T00:55:44.256477899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:55:44.259007 containerd[1472]: time="2026-01-28T00:55:44.258848708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:55:44.260248 containerd[1472]: time="2026-01-28T00:55:44.260147454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:55:44.261522 containerd[1472]: time="2026-01-28T00:55:44.261457144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:55:44.262971 containerd[1472]: time="2026-01-28T00:55:44.262788279Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:55:44.264253 containerd[1472]: time="2026-01-28T00:55:44.264210193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 00:55:44.265396 containerd[1472]: time="2026-01-28T00:55:44.265331528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:55:44.268222 containerd[1472]: time="2026-01-28T00:55:44.268165313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:55:44.270214 containerd[1472]: time="2026-01-28T00:55:44.270146244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.251298ms" Jan 28 00:55:44.270979 containerd[1472]: time="2026-01-28T00:55:44.270841063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.096007ms" Jan 28 00:55:44.272100 containerd[1472]: time="2026-01-28T00:55:44.272057757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.527634ms" Jan 28 00:55:44.555154 kubelet[2170]: E0128 00:55:44.552190 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:55:44.556327 containerd[1472]: time="2026-01-28T00:55:44.554563275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:55:44.556327 containerd[1472]: time="2026-01-28T00:55:44.555709761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:55:44.556327 containerd[1472]: time="2026-01-28T00:55:44.556201068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.559585 containerd[1472]: time="2026-01-28T00:55:44.559202843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.565458 containerd[1472]: time="2026-01-28T00:55:44.565254780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:55:44.565620 containerd[1472]: time="2026-01-28T00:55:44.565467538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:55:44.565813 containerd[1472]: time="2026-01-28T00:55:44.565579965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.568949 containerd[1472]: time="2026-01-28T00:55:44.566052170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.576473 kubelet[2170]: E0128 00:55:44.576372 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:55:44.607497 containerd[1472]: time="2026-01-28T00:55:44.605409338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:55:44.607497 containerd[1472]: time="2026-01-28T00:55:44.605481552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:55:44.607497 containerd[1472]: time="2026-01-28T00:55:44.605493135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.607497 containerd[1472]: time="2026-01-28T00:55:44.605828803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:44.668131 systemd[1]: Started cri-containerd-2a076ed3ee920a9d0f56ef819b53dfceef026537c600d7709a1244ea06563f8f.scope - libcontainer container 2a076ed3ee920a9d0f56ef819b53dfceef026537c600d7709a1244ea06563f8f. Jan 28 00:55:44.683129 kubelet[2170]: E0128 00:55:44.683000 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="1.6s" Jan 28 00:55:44.692171 systemd[1]: Started cri-containerd-4a356cad3bdfd1f8961fefd22e81156eb9ffbac684452d6b63a9140d89b81094.scope - libcontainer container 4a356cad3bdfd1f8961fefd22e81156eb9ffbac684452d6b63a9140d89b81094. Jan 28 00:55:44.703516 systemd[1]: Started cri-containerd-574eb4be03cd0700e695abf6e8541d0257bcc398214ca12c0ebbcfca0498e6cf.scope - libcontainer container 574eb4be03cd0700e695abf6e8541d0257bcc398214ca12c0ebbcfca0498e6cf. Jan 28 00:55:44.862311 containerd[1472]: time="2026-01-28T00:55:44.862146606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42053ba272c662d83778a147e2822c59,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a076ed3ee920a9d0f56ef819b53dfceef026537c600d7709a1244ea06563f8f\"" Jan 28 00:55:44.872429 kubelet[2170]: E0128 00:55:44.872344 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:44.883581 containerd[1472]: time="2026-01-28T00:55:44.883471187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a356cad3bdfd1f8961fefd22e81156eb9ffbac684452d6b63a9140d89b81094\"" Jan 28 00:55:44.887694 kubelet[2170]: E0128 00:55:44.887564 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:44.893848 containerd[1472]: time="2026-01-28T00:55:44.893762939Z" level=info msg="CreateContainer within sandbox \"2a076ed3ee920a9d0f56ef819b53dfceef026537c600d7709a1244ea06563f8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:55:44.895461 containerd[1472]: time="2026-01-28T00:55:44.895435675Z" level=info msg="CreateContainer within sandbox \"4a356cad3bdfd1f8961fefd22e81156eb9ffbac684452d6b63a9140d89b81094\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:55:44.899774 containerd[1472]: time="2026-01-28T00:55:44.899681794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"574eb4be03cd0700e695abf6e8541d0257bcc398214ca12c0ebbcfca0498e6cf\"" Jan 28 00:55:44.900761 kubelet[2170]: E0128 00:55:44.900684 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:44.905389 containerd[1472]: time="2026-01-28T00:55:44.905251365Z" level=info msg="CreateContainer within sandbox \"574eb4be03cd0700e695abf6e8541d0257bcc398214ca12c0ebbcfca0498e6cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:55:44.925713 containerd[1472]: time="2026-01-28T00:55:44.925529954Z" level=info msg="CreateContainer within sandbox \"2a076ed3ee920a9d0f56ef819b53dfceef026537c600d7709a1244ea06563f8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1179465e2a0b94190060e7cc62d6b0f13e811edb1a0bbf588e08feac4740fd8d\"" Jan 28 00:55:44.927714 containerd[1472]: time="2026-01-28T00:55:44.927689759Z" level=info msg="StartContainer for \"1179465e2a0b94190060e7cc62d6b0f13e811edb1a0bbf588e08feac4740fd8d\"" Jan 28 00:55:44.934987 containerd[1472]: time="2026-01-28T00:55:44.934871574Z" level=info msg="CreateContainer within sandbox \"4a356cad3bdfd1f8961fefd22e81156eb9ffbac684452d6b63a9140d89b81094\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea788767b8d3f8d44af669ad59107e3b15c92e7c48bc3624b1a426ab172e8ea2\"" Jan 28 00:55:44.936750 containerd[1472]: time="2026-01-28T00:55:44.935702318Z" level=info msg="StartContainer for \"ea788767b8d3f8d44af669ad59107e3b15c92e7c48bc3624b1a426ab172e8ea2\"" Jan 28 00:55:44.944871 containerd[1472]: time="2026-01-28T00:55:44.944843875Z" level=info msg="CreateContainer within sandbox \"574eb4be03cd0700e695abf6e8541d0257bcc398214ca12c0ebbcfca0498e6cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7b9aa5eb3ec0a80323b66545c59b251f49f99e9d79721dc8cf8f187a57c8c8b\"" Jan 28 00:55:44.945803 containerd[1472]: time="2026-01-28T00:55:44.945781962Z" level=info msg="StartContainer for \"a7b9aa5eb3ec0a80323b66545c59b251f49f99e9d79721dc8cf8f187a57c8c8b\"" Jan 28 00:55:44.951275 kubelet[2170]: I0128 00:55:44.951253 2170 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:44.951690 kubelet[2170]: E0128 00:55:44.951664 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 28 00:55:44.977048 systemd[1]: Started cri-containerd-1179465e2a0b94190060e7cc62d6b0f13e811edb1a0bbf588e08feac4740fd8d.scope - libcontainer container 1179465e2a0b94190060e7cc62d6b0f13e811edb1a0bbf588e08feac4740fd8d. Jan 28 00:55:44.988064 systemd[1]: Started cri-containerd-a7b9aa5eb3ec0a80323b66545c59b251f49f99e9d79721dc8cf8f187a57c8c8b.scope - libcontainer container a7b9aa5eb3ec0a80323b66545c59b251f49f99e9d79721dc8cf8f187a57c8c8b. Jan 28 00:55:44.991843 systemd[1]: Started cri-containerd-ea788767b8d3f8d44af669ad59107e3b15c92e7c48bc3624b1a426ab172e8ea2.scope - libcontainer container ea788767b8d3f8d44af669ad59107e3b15c92e7c48bc3624b1a426ab172e8ea2. 
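While the API server at 10.0.0.11:6443 keeps refusing connections, the "Failed to ensure lease exists, will retry" entries above show the lease controller's retry interval doubling: 200ms, 400ms, 800ms, then 1.6s. A tiny sketch of that doubling pattern; the 7s ceiling is an assumption for illustration only, not a value taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval, matching the intervals logged by controller.go above.
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed ceiling, not shown in the log
	for i := 0; i < 6; i++ {
		fmt.Println("next lease attempt in", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```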
Jan 28 00:55:45.119154 containerd[1472]: time="2026-01-28T00:55:45.118589339Z" level=info msg="StartContainer for \"1179465e2a0b94190060e7cc62d6b0f13e811edb1a0bbf588e08feac4740fd8d\" returns successfully" Jan 28 00:55:45.133955 containerd[1472]: time="2026-01-28T00:55:45.133849823Z" level=info msg="StartContainer for \"ea788767b8d3f8d44af669ad59107e3b15c92e7c48bc3624b1a426ab172e8ea2\" returns successfully" Jan 28 00:55:45.144966 containerd[1472]: time="2026-01-28T00:55:45.144935311Z" level=info msg="StartContainer for \"a7b9aa5eb3ec0a80323b66545c59b251f49f99e9d79721dc8cf8f187a57c8c8b\" returns successfully" Jan 28 00:55:45.390951 kubelet[2170]: E0128 00:55:45.382819 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:45.390951 kubelet[2170]: E0128 00:55:45.385753 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:45.390951 kubelet[2170]: E0128 00:55:45.386827 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:45.390951 kubelet[2170]: E0128 00:55:45.389979 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:45.393875 kubelet[2170]: E0128 00:55:45.393810 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:45.394145 kubelet[2170]: E0128 00:55:45.394091 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:46.414484 kubelet[2170]: E0128 00:55:46.413874 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:46.414484 kubelet[2170]: E0128 00:55:46.414616 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:46.414484 kubelet[2170]: E0128 00:55:46.414224 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:46.414484 kubelet[2170]: E0128 00:55:46.414744 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:46.586752 kubelet[2170]: I0128 00:55:46.586346 2170 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:47.159687 kubelet[2170]: E0128 00:55:47.159199 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:55:47.159687 kubelet[2170]: E0128 00:55:47.159671 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:47.935482 kubelet[2170]: E0128 00:55:47.934944 2170 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 00:55:48.033429 kubelet[2170]: I0128 00:55:48.033312 2170 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:55:48.033429 kubelet[2170]: E0128 00:55:48.033417 2170 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 00:55:48.079509 kubelet[2170]: I0128 00:55:48.079393 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:48.088245 kubelet[2170]: E0128 00:55:48.088149 2170 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:48.088245 kubelet[2170]: I0128 00:55:48.088212 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:48.091274 kubelet[2170]: E0128 00:55:48.091232 2170 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:48.091991 kubelet[2170]: I0128 00:55:48.091398 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:48.093274 kubelet[2170]: E0128 00:55:48.093206 2170 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:48.262545 kubelet[2170]: I0128 00:55:48.261268 2170 apiserver.go:52] "Watching apiserver" Jan 28 00:55:48.279372 kubelet[2170]: I0128 00:55:48.279319 2170 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:55:49.191528 kubelet[2170]: I0128 00:55:49.190156 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:49.198354 kubelet[2170]: E0128 00:55:49.198238 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:49.436328 kubelet[2170]: E0128 00:55:49.436274 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:50.461214 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Jan 28 00:55:50.461288 systemd[1]: Reloading... Jan 28 00:55:50.572725 kubelet[2170]: I0128 00:55:50.571687 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:50.608121 kubelet[2170]: E0128 00:55:50.606066 2170 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:50.663995 zram_generator::config[2500]: No configuration found. 
Jan 28 00:55:51.056015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:55:51.188945 systemd[1]: Reloading finished in 727 ms. Jan 28 00:55:51.280523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:51.311629 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:55:51.312188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:51.312513 systemd[1]: kubelet.service: Consumed 2.546s CPU time, 128.6M memory peak, 0B memory swap peak. Jan 28 00:55:51.324664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:55:51.645246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:55:51.652054 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:55:51.738577 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:55:51.738577 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:55:51.739046 kubelet[2539]: I0128 00:55:51.738696 2539 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:55:51.746415 kubelet[2539]: I0128 00:55:51.746362 2539 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:55:51.746415 kubelet[2539]: I0128 00:55:51.746399 2539 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:55:51.746545 kubelet[2539]: I0128 00:55:51.746427 2539 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:55:51.746545 kubelet[2539]: I0128 00:55:51.746439 2539 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:55:51.746683 kubelet[2539]: I0128 00:55:51.746623 2539 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:55:51.747783 kubelet[2539]: I0128 00:55:51.747733 2539 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 00:55:51.751083 kubelet[2539]: I0128 00:55:51.750994 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:55:51.755212 kubelet[2539]: E0128 00:55:51.755116 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:55:51.755212 kubelet[2539]: I0128 00:55:51.755157 2539 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 00:55:51.765818 kubelet[2539]: I0128 00:55:51.765783 2539 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 00:55:51.767164 kubelet[2539]: I0128 00:55:51.766422 2539 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:55:51.767164 kubelet[2539]: I0128 00:55:51.766470 2539 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:55:51.767164 kubelet[2539]: I0128 00:55:51.766700 2539 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:55:51.767164 kubelet[2539]: I0128 00:55:51.766713 2539 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:55:51.767431 kubelet[2539]: I0128 00:55:51.766746 2539 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:55:51.768011 kubelet[2539]: I0128 00:55:51.767968 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:55:51.768225 kubelet[2539]: I0128 00:55:51.768182 2539 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:55:51.768225 kubelet[2539]: I0128 00:55:51.768224 2539 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:55:51.768281 kubelet[2539]: I0128 00:55:51.768257 2539 kubelet.go:387] "Adding apiserver pod source" Jan 28 00:55:51.768305 kubelet[2539]: I0128 00:55:51.768282 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:55:51.771301 kubelet[2539]: I0128 00:55:51.771065 2539 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:55:51.771690 kubelet[2539]: I0128 00:55:51.771606 2539 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:55:51.771690 kubelet[2539]: I0128 00:55:51.771671 2539 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 00:55:51.779285 
kubelet[2539]: I0128 00:55:51.779076 2539 server.go:1262] "Started kubelet" Jan 28 00:55:51.780949 kubelet[2539]: I0128 00:55:51.780665 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:55:51.782685 kubelet[2539]: I0128 00:55:51.780877 2539 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:55:51.783665 kubelet[2539]: I0128 00:55:51.783621 2539 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:55:51.788008 kubelet[2539]: E0128 00:55:51.787972 2539 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:55:51.789681 kubelet[2539]: I0128 00:55:51.789624 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:55:51.793748 kubelet[2539]: I0128 00:55:51.792710 2539 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:55:51.793748 kubelet[2539]: I0128 00:55:51.793169 2539 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:55:51.793748 kubelet[2539]: I0128 00:55:51.793395 2539 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:55:51.793748 kubelet[2539]: I0128 00:55:51.793423 2539 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:55:51.793748 kubelet[2539]: I0128 00:55:51.793636 2539 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:55:51.793948 kubelet[2539]: I0128 00:55:51.793772 2539 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:55:51.805955 kubelet[2539]: I0128 00:55:51.805198 2539 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:55:51.805955 kubelet[2539]: I0128 00:55:51.805632 2539 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:55:51.841276 kubelet[2539]: I0128 00:55:51.840954 2539 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:55:51.861850 kubelet[2539]: I0128 00:55:51.861638 2539 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:55:51.905376 kubelet[2539]: I0128 00:55:51.902467 2539 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:55:51.905376 kubelet[2539]: I0128 00:55:51.902654 2539 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:55:51.905376 kubelet[2539]: I0128 00:55:51.902767 2539 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:55:51.908712 kubelet[2539]: E0128 00:55:51.902976 2539 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:55:52.009375 kubelet[2539]: E0128 00:55:52.008447 2539 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.033854 2539 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.033876 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.034148 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.034743 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.034756 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:55:52.034607 kubelet[2539]: I0128 00:55:52.034776 2539 policy_none.go:49] "None policy: Start" Jan 28 00:55:52.036695 kubelet[2539]: I0128 00:55:52.036680 2539 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:55:52.036843 kubelet[2539]: I0128 00:55:52.036783 2539 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:55:52.037972 kubelet[2539]: I0128 00:55:52.036986 2539 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 00:55:52.037972 kubelet[2539]: I0128 00:55:52.037002 2539 policy_none.go:47] "Start" Jan 28 00:55:52.046131 kubelet[2539]: E0128 00:55:52.046082 2539 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:55:52.046577 kubelet[2539]: I0128 00:55:52.046521 2539 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:55:52.046613 kubelet[2539]: I0128 00:55:52.046584 2539 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:55:52.048772 kubelet[2539]: I0128 00:55:52.047855 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:55:52.052202 kubelet[2539]: E0128 00:55:52.050209 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:55:52.180374 kubelet[2539]: I0128 00:55:52.177805 2539 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:55:52.188880 kubelet[2539]: I0128 00:55:52.188844 2539 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 00:55:52.189521 kubelet[2539]: I0128 00:55:52.189208 2539 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:55:52.210877 kubelet[2539]: I0128 00:55:52.210759 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:52.211483 kubelet[2539]: I0128 00:55:52.211256 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.211483 kubelet[2539]: I0128 00:55:52.211346 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.221310 kubelet[2539]: E0128 00:55:52.221134 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.224487 kubelet[2539]: E0128 00:55:52.224464 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:52.450415 kubelet[2539]: I0128 00:55:52.447537 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.450415 kubelet[2539]: I0128 00:55:52.447721 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.450415 kubelet[2539]: I0128 00:55:52.448052 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.450415 kubelet[2539]: I0128 00:55:52.448072 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.450415 kubelet[2539]: I0128 00:55:52.448214 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:52.451510 kubelet[2539]: I0128 00:55:52.448229 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.451510 kubelet[2539]: I0128 00:55:52.448269 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:55:52.451510 kubelet[2539]: I0128 00:55:52.448282 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.451510 kubelet[2539]: I0128 00:55:52.448295 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42053ba272c662d83778a147e2822c59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42053ba272c662d83778a147e2822c59\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.776569 kubelet[2539]: I0128 00:55:52.772834 2539 apiserver.go:52] "Watching apiserver" Jan 28 00:55:52.857498 kubelet[2539]: E0128 00:55:52.857095 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:52.860457 kubelet[2539]: E0128 00:55:52.860374 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:52.860871 kubelet[2539]: E0128 00:55:52.860632 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:52.899959 kubelet[2539]: I0128 00:55:52.896004 2539 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:55:52.906290 kubelet[2539]: I0128 00:55:52.906217 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.906186492 podStartE2EDuration="2.906186492s" podCreationTimestamp="2026-01-28 00:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:55:52.896567387 +0000 UTC m=+1.237380555" watchObservedRunningTime="2026-01-28 00:55:52.906186492 +0000 UTC m=+1.246999640" Jan 28 00:55:52.906433 kubelet[2539]: I0128 00:55:52.906334 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.906329354 podStartE2EDuration="3.906329354s" podCreationTimestamp="2026-01-28 00:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:55:52.905071594 +0000 UTC m=+1.245884732" watchObservedRunningTime="2026-01-28 00:55:52.906329354 +0000 UTC m=+1.247142491" Jan 28 00:55:52.913310 
kubelet[2539]: I0128 00:55:52.913113 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.913104022 podStartE2EDuration="913.104022ms" podCreationTimestamp="2026-01-28 00:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:55:52.912330925 +0000 UTC m=+1.253144063" watchObservedRunningTime="2026-01-28 00:55:52.913104022 +0000 UTC m=+1.253917160" Jan 28 00:55:52.953526 kubelet[2539]: E0128 00:55:52.953431 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:52.953642 kubelet[2539]: I0128 00:55:52.953454 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:52.953864 kubelet[2539]: I0128 00:55:52.953458 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.963323 kubelet[2539]: E0128 00:55:52.963272 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 00:55:52.963594 kubelet[2539]: E0128 00:55:52.963497 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:52.967000 kubelet[2539]: E0128 00:55:52.966838 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 00:55:52.967132 kubelet[2539]: E0128 00:55:52.967042 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:55.900440 kubelet[2539]: E0128 00:55:55.894640 2539 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.972s" Jan 28 00:55:55.900440 kubelet[2539]: E0128 00:55:55.900413 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:55.902296 kubelet[2539]: E0128 00:55:55.901126 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:56.219065 kubelet[2539]: I0128 00:55:56.218263 2539 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:55:56.219523 kubelet[2539]: I0128 00:55:56.219425 2539 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:55:56.219552 containerd[1472]: time="2026-01-28T00:55:56.219115602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:55:56.876990 systemd[1]: Created slice kubepods-besteffort-pod6453731a_ba42_43fd_8cea_ab4acd48a02e.slice - libcontainer container kubepods-besteffort-pod6453731a_ba42_43fd_8cea_ab4acd48a02e.slice. 
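The pod_startup_latency_tracker entries above report startup durations that line up with simple timestamp arithmetic on the logged values: for kube-scheduler-localhost, the logged podStartSLOduration of 2.906186492s equals watchObservedRunningTime (00:55:52.906186492) minus podCreationTimestamp (00:55:50). A short Go check of that arithmetic, with both timestamps copied from the entry above; this only re-derives the number, it is not the tracker's own code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	// Timestamps copied from the kube-scheduler-localhost entry above.
	created, err := time.Parse(layout, "2026-01-28 00:55:50 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-28 00:55:52.906186492 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2.906186492s, the logged podStartSLOduration
}
```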
Jan 28 00:55:56.913514 kubelet[2539]: I0128 00:55:56.913199 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6453731a-ba42-43fd-8cea-ab4acd48a02e-lib-modules\") pod \"kube-proxy-lbq7f\" (UID: \"6453731a-ba42-43fd-8cea-ab4acd48a02e\") " pod="kube-system/kube-proxy-lbq7f" Jan 28 00:55:56.913514 kubelet[2539]: I0128 00:55:56.913257 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6453731a-ba42-43fd-8cea-ab4acd48a02e-kube-proxy\") pod \"kube-proxy-lbq7f\" (UID: \"6453731a-ba42-43fd-8cea-ab4acd48a02e\") " pod="kube-system/kube-proxy-lbq7f" Jan 28 00:55:56.913514 kubelet[2539]: I0128 00:55:56.913290 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6453731a-ba42-43fd-8cea-ab4acd48a02e-xtables-lock\") pod \"kube-proxy-lbq7f\" (UID: \"6453731a-ba42-43fd-8cea-ab4acd48a02e\") " pod="kube-system/kube-proxy-lbq7f" Jan 28 00:55:56.913514 kubelet[2539]: I0128 00:55:56.913411 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl7t\" (UniqueName: \"kubernetes.io/projected/6453731a-ba42-43fd-8cea-ab4acd48a02e-kube-api-access-qkl7t\") pod \"kube-proxy-lbq7f\" (UID: \"6453731a-ba42-43fd-8cea-ab4acd48a02e\") " pod="kube-system/kube-proxy-lbq7f" Jan 28 00:55:57.020657 kubelet[2539]: E0128 00:55:57.020518 2539 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 28 00:55:57.020657 kubelet[2539]: E0128 00:55:57.020591 2539 projected.go:196] Error preparing data for projected volume kube-api-access-qkl7t for pod kube-system/kube-proxy-lbq7f: configmap "kube-root-ca.crt" not found Jan 28 00:55:57.021231 kubelet[2539]: E0128 00:55:57.020783 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6453731a-ba42-43fd-8cea-ab4acd48a02e-kube-api-access-qkl7t podName:6453731a-ba42-43fd-8cea-ab4acd48a02e nodeName:}" failed. No retries permitted until 2026-01-28 00:55:57.520702953 +0000 UTC m=+5.861516091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qkl7t" (UniqueName: "kubernetes.io/projected/6453731a-ba42-43fd-8cea-ab4acd48a02e-kube-api-access-qkl7t") pod "kube-proxy-lbq7f" (UID: "6453731a-ba42-43fd-8cea-ab4acd48a02e") : configmap "kube-root-ca.crt" not found Jan 28 00:55:57.386144 kubelet[2539]: E0128 00:55:57.386075 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:57.413765 systemd[1]: Created slice kubepods-besteffort-podaf518a7b_ce85_4477_8c12_1c394629cbd5.slice - libcontainer container kubepods-besteffort-podaf518a7b_ce85_4477_8c12_1c394629cbd5.slice. 
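The recurring dns.go "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the kubelet will propagate to pods; only the first three are kept, which is why the applied line is trimmed to "1.1.1.1 1.0.0.1 8.8.8.8". A small node-side sketch for inspecting that configuration (a plain resolv.conf parser written for illustration, not the kubelet's own code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Println("configured nameservers:", servers)
	if len(servers) > 3 {
		// The kubelet warns and applies only the first three, as in the log above.
		fmt.Println("only the first 3 will be applied:", servers[:3])
	}
}
```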
Jan 28 00:55:57.417260 kubelet[2539]: I0128 00:55:57.417199 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af518a7b-ce85-4477-8c12-1c394629cbd5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-c67sh\" (UID: \"af518a7b-ce85-4477-8c12-1c394629cbd5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-c67sh" Jan 28 00:55:57.417260 kubelet[2539]: I0128 00:55:57.417251 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snsx7\" (UniqueName: \"kubernetes.io/projected/af518a7b-ce85-4477-8c12-1c394629cbd5-kube-api-access-snsx7\") pod \"tigera-operator-65cdcdfd6d-c67sh\" (UID: \"af518a7b-ce85-4477-8c12-1c394629cbd5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-c67sh" Jan 28 00:55:57.721706 containerd[1472]: time="2026-01-28T00:55:57.721528421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-c67sh,Uid:af518a7b-ce85-4477-8c12-1c394629cbd5,Namespace:tigera-operator,Attempt:0,}" Jan 28 00:55:57.764805 containerd[1472]: time="2026-01-28T00:55:57.764427863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:55:57.764805 containerd[1472]: time="2026-01-28T00:55:57.764606322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:55:57.764805 containerd[1472]: time="2026-01-28T00:55:57.764618237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:57.764805 containerd[1472]: time="2026-01-28T00:55:57.764748838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:57.808273 kubelet[2539]: E0128 00:55:57.808204 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:57.808929 containerd[1472]: time="2026-01-28T00:55:57.808793026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbq7f,Uid:6453731a-ba42-43fd-8cea-ab4acd48a02e,Namespace:kube-system,Attempt:0,}" Jan 28 00:55:57.887077 containerd[1472]: time="2026-01-28T00:55:57.886598946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:55:57.887077 containerd[1472]: time="2026-01-28T00:55:57.886676041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:55:57.887077 containerd[1472]: time="2026-01-28T00:55:57.886692604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:57.887077 containerd[1472]: time="2026-01-28T00:55:57.886802946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:55:57.905541 systemd[1]: Started cri-containerd-14f226ed47cdf385e003f4298381a93f139a9999f9538506b65e65ca944de1aa.scope - libcontainer container 14f226ed47cdf385e003f4298381a93f139a9999f9538506b65e65ca944de1aa. 
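The RunPodSandbox entries carry the pod name, UID and namespace inline in the dumped PodSandboxMetadata struct. When these logs need to be mined, a small stdlib scraper is enough; the snippet below is only a log-parsing aid for lines of this shape, not a CRI client:

```go
// sandbox_events.go - extract pod name, UID and namespace from containerd's
// "RunPodSandbox for &PodSandboxMetadata{...}" log lines shown above.
package main

import (
	"fmt"
	"regexp"
)

var sandboxRe = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata{Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),`)

func main() {
	// Example line copied from this log.
	line := `time="2026-01-28T00:55:57.721528421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-c67sh,Uid:af518a7b-ce85-4477-8c12-1c394629cbd5,Namespace:tigera-operator,Attempt:0,}"`

	if m := sandboxRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("pod=%s uid=%s namespace=%s\n", m[1], m[2], m[3])
	}
}
```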
Jan 28 00:55:57.920328 kubelet[2539]: E0128 00:55:57.920240 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:57.984196 systemd[1]: Started cri-containerd-d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed.scope - libcontainer container d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed. Jan 28 00:55:58.014543 containerd[1472]: time="2026-01-28T00:55:58.013845308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-c67sh,Uid:af518a7b-ce85-4477-8c12-1c394629cbd5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14f226ed47cdf385e003f4298381a93f139a9999f9538506b65e65ca944de1aa\"" Jan 28 00:55:58.021948 containerd[1472]: time="2026-01-28T00:55:58.021806251Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 00:55:58.040865 containerd[1472]: time="2026-01-28T00:55:58.040805448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbq7f,Uid:6453731a-ba42-43fd-8cea-ab4acd48a02e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed\"" Jan 28 00:55:58.042818 kubelet[2539]: E0128 00:55:58.042747 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:58.053461 containerd[1472]: time="2026-01-28T00:55:58.053365723Z" level=info msg="CreateContainer within sandbox \"d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:55:58.078642 containerd[1472]: time="2026-01-28T00:55:58.078484938Z" level=info msg="CreateContainer within sandbox \"d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6cbd4e2e32f4e345e3f60c8e87edeefc0abacfb61f53a3fcdea7439604667419\"" Jan 28 00:55:58.079690 containerd[1472]: time="2026-01-28T00:55:58.079642573Z" level=info msg="StartContainer for \"6cbd4e2e32f4e345e3f60c8e87edeefc0abacfb61f53a3fcdea7439604667419\"" Jan 28 00:55:58.142129 systemd[1]: Started cri-containerd-6cbd4e2e32f4e345e3f60c8e87edeefc0abacfb61f53a3fcdea7439604667419.scope - libcontainer container 6cbd4e2e32f4e345e3f60c8e87edeefc0abacfb61f53a3fcdea7439604667419. 
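Each "Started cri-containerd-<id>.scope" unit corresponds to the sandbox or container whose 64-character ID containerd just returned, since the systemd cgroup driver names the transient scope after the container ID. A small sketch of that naming, using the kube-proxy sandbox ID from the log; the helper itself is hypothetical:

```go
// scope_name.go - rebuild the cri-containerd-<id>.scope unit name from a
// container/sandbox ID and sanity-check the ID format.
package main

import (
	"fmt"
	"regexp"
)

var idRe = regexp.MustCompile(`^[0-9a-f]{64}$`) // container IDs are 64 hex chars

func scopeName(containerID string) (string, error) {
	if !idRe.MatchString(containerID) {
		return "", fmt.Errorf("unexpected container id %q", containerID)
	}
	return "cri-containerd-" + containerID + ".scope", nil
}

func main() {
	id := "d8fb74544b07c952db17d4638cffe0589a7129ee62cf76c94148371ba5f37bed" // kube-proxy sandbox from the log
	name, err := scopeName(id)
	if err != nil {
		panic(err)
	}
	fmt.Println(name)
}
```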
Jan 28 00:55:58.187408 containerd[1472]: time="2026-01-28T00:55:58.187186001Z" level=info msg="StartContainer for \"6cbd4e2e32f4e345e3f60c8e87edeefc0abacfb61f53a3fcdea7439604667419\" returns successfully" Jan 28 00:55:58.675965 kubelet[2539]: E0128 00:55:58.675809 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:58.928239 kubelet[2539]: E0128 00:55:58.926559 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:58.930198 kubelet[2539]: E0128 00:55:58.930147 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:58.950052 kubelet[2539]: I0128 00:55:58.949984 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbq7f" podStartSLOduration=2.949842313 podStartE2EDuration="2.949842313s" podCreationTimestamp="2026-01-28 00:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:55:58.938703147 +0000 UTC m=+7.279516285" watchObservedRunningTime="2026-01-28 00:55:58.949842313 +0000 UTC m=+7.290655471" Jan 28 00:55:59.160470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73540785.mount: Deactivated successfully. Jan 28 00:55:59.468498 kubelet[2539]: E0128 00:55:59.467563 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:55:59.933257 kubelet[2539]: E0128 00:55:59.932709 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:02.589809 containerd[1472]: time="2026-01-28T00:56:02.589643133Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:02.591781 containerd[1472]: time="2026-01-28T00:56:02.590701966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 00:56:02.591819 containerd[1472]: time="2026-01-28T00:56:02.591775459Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:02.595078 containerd[1472]: time="2026-01-28T00:56:02.595034832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:02.595936 containerd[1472]: time="2026-01-28T00:56:02.595831283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.573994622s" Jan 28 00:56:02.595936 containerd[1472]: time="2026-01-28T00:56:02.595878909Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 00:56:02.608185 containerd[1472]: time="2026-01-28T00:56:02.608076088Z" level=info msg="CreateContainer within sandbox \"14f226ed47cdf385e003f4298381a93f139a9999f9538506b65e65ca944de1aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 00:56:02.627514 containerd[1472]: time="2026-01-28T00:56:02.627409682Z" level=info msg="CreateContainer within sandbox \"14f226ed47cdf385e003f4298381a93f139a9999f9538506b65e65ca944de1aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ea2f1dec7cc530aacc4da69889365e193b5abe554d7c39071c3472812eb1101c\"" Jan 28 00:56:02.629244 containerd[1472]: time="2026-01-28T00:56:02.629192002Z" level=info msg="StartContainer for \"ea2f1dec7cc530aacc4da69889365e193b5abe554d7c39071c3472812eb1101c\"" Jan 28 00:56:02.679163 systemd[1]: Started cri-containerd-ea2f1dec7cc530aacc4da69889365e193b5abe554d7c39071c3472812eb1101c.scope - libcontainer container ea2f1dec7cc530aacc4da69889365e193b5abe554d7c39071c3472812eb1101c. Jan 28 00:56:02.718522 containerd[1472]: time="2026-01-28T00:56:02.718415433Z" level=info msg="StartContainer for \"ea2f1dec7cc530aacc4da69889365e193b5abe554d7c39071c3472812eb1101c\" returns successfully" Jan 28 00:56:02.959702 kubelet[2539]: I0128 00:56:02.959621 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-c67sh" podStartSLOduration=1.3829397399999999 podStartE2EDuration="5.95960371s" podCreationTimestamp="2026-01-28 00:55:57 +0000 UTC" firstStartedPulling="2026-01-28 00:55:58.020714409 +0000 UTC m=+6.361527557" lastFinishedPulling="2026-01-28 00:56:02.597378389 +0000 UTC m=+10.938191527" observedRunningTime="2026-01-28 00:56:02.959333843 +0000 UTC m=+11.300146981" watchObservedRunningTime="2026-01-28 00:56:02.95960371 +0000 UTC m=+11.300416848" Jan 28 00:56:11.453789 sudo[1637]: pam_unix(sudo:session): session closed for user root Jan 28 00:56:11.464831 sshd[1634]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:11.480003 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:58446.service: Deactivated successfully. Jan 28 00:56:11.489965 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:56:11.490810 systemd[1]: session-7.scope: Consumed 17.476s CPU time, 161.0M memory peak, 0B memory swap peak. Jan 28 00:56:11.492375 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:56:11.499649 systemd-logind[1461]: Removed session 7. 
Jan 28 00:56:16.376369 kubelet[2539]: I0128 00:56:16.376313 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/01bf00d4-8e01-4e22-beea-05760ae33475-typha-certs\") pod \"calico-typha-6c8b7d5df8-wjcnw\" (UID: \"01bf00d4-8e01-4e22-beea-05760ae33475\") " pod="calico-system/calico-typha-6c8b7d5df8-wjcnw" Jan 28 00:56:16.376369 kubelet[2539]: I0128 00:56:16.376375 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw4h2\" (UniqueName: \"kubernetes.io/projected/01bf00d4-8e01-4e22-beea-05760ae33475-kube-api-access-pw4h2\") pod \"calico-typha-6c8b7d5df8-wjcnw\" (UID: \"01bf00d4-8e01-4e22-beea-05760ae33475\") " pod="calico-system/calico-typha-6c8b7d5df8-wjcnw" Jan 28 00:56:16.377673 kubelet[2539]: I0128 00:56:16.376409 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bf00d4-8e01-4e22-beea-05760ae33475-tigera-ca-bundle\") pod \"calico-typha-6c8b7d5df8-wjcnw\" (UID: \"01bf00d4-8e01-4e22-beea-05760ae33475\") " pod="calico-system/calico-typha-6c8b7d5df8-wjcnw" Jan 28 00:56:16.388679 systemd[1]: Created slice kubepods-besteffort-pod01bf00d4_8e01_4e22_beea_05760ae33475.slice - libcontainer container kubepods-besteffort-pod01bf00d4_8e01_4e22_beea_05760ae33475.slice. Jan 28 00:56:16.603059 systemd[1]: Created slice kubepods-besteffort-podd0bb2249_7cd9_4415_b807_92e69bd752bf.slice - libcontainer container kubepods-besteffort-podd0bb2249_7cd9_4415_b807_92e69bd752bf.slice. Jan 28 00:56:16.678442 kubelet[2539]: I0128 00:56:16.678184 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-cni-net-dir\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678442 kubelet[2539]: I0128 00:56:16.678249 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-lib-modules\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678442 kubelet[2539]: I0128 00:56:16.678269 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-policysync\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678442 kubelet[2539]: I0128 00:56:16.678286 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-cni-log-dir\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678442 kubelet[2539]: I0128 00:56:16.678331 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-flexvol-driver-host\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " 
pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678745 kubelet[2539]: I0128 00:56:16.678361 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-var-lib-calico\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678745 kubelet[2539]: I0128 00:56:16.678377 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-xtables-lock\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678745 kubelet[2539]: I0128 00:56:16.678399 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bb2249-7cd9-4415-b807-92e69bd752bf-tigera-ca-bundle\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678745 kubelet[2539]: I0128 00:56:16.678421 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-var-run-calico\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678745 kubelet[2539]: I0128 00:56:16.678439 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0bb2249-7cd9-4415-b807-92e69bd752bf-node-certs\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678862 kubelet[2539]: I0128 00:56:16.678453 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0bb2249-7cd9-4415-b807-92e69bd752bf-cni-bin-dir\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.678862 kubelet[2539]: I0128 00:56:16.678469 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpjg\" (UniqueName: \"kubernetes.io/projected/d0bb2249-7cd9-4415-b807-92e69bd752bf-kube-api-access-gwpjg\") pod \"calico-node-vcr9v\" (UID: \"d0bb2249-7cd9-4415-b807-92e69bd752bf\") " pod="calico-system/calico-node-vcr9v" Jan 28 00:56:16.698970 kubelet[2539]: E0128 00:56:16.698833 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:16.700204 containerd[1472]: time="2026-01-28T00:56:16.700100241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8b7d5df8-wjcnw,Uid:01bf00d4-8e01-4e22-beea-05760ae33475,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:16.767961 containerd[1472]: time="2026-01-28T00:56:16.767144750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:16.767961 containerd[1472]: time="2026-01-28T00:56:16.767344190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:16.767961 containerd[1472]: time="2026-01-28T00:56:16.767370372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:16.772038 containerd[1472]: time="2026-01-28T00:56:16.767879169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:16.803598 kubelet[2539]: E0128 00:56:16.803328 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.803598 kubelet[2539]: W0128 00:56:16.803479 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.804619 kubelet[2539]: E0128 00:56:16.804017 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.805849 kubelet[2539]: E0128 00:56:16.805815 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.806186 kubelet[2539]: W0128 00:56:16.806022 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.806496 kubelet[2539]: E0128 00:56:16.806047 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.818434 kubelet[2539]: E0128 00:56:16.818382 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.818434 kubelet[2539]: W0128 00:56:16.818434 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.818630 kubelet[2539]: E0128 00:56:16.818462 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.823487 kubelet[2539]: E0128 00:56:16.823395 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:16.855964 systemd[1]: Started cri-containerd-6f20dc0561433f86409cbefac336e8070ceebe270dac14cebf0327eac5527618.scope - libcontainer container 6f20dc0561433f86409cbefac336e8070ceebe270dac14cebf0327eac5527618. 
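The burst of FlexVolume errors that follows is expected at this stage: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, but the binary is not installed yet (calico-node's init container later drops it into the flexvol-driver-host hostPath mounted above), so the exec produces no output and the empty string fails JSON unmarshalling ("unexpected end of JSON input"). For reference, a driver answering the init call prints a status object roughly like the stub below; the field names follow the FlexVolume convention and this is not Calico's actual driver:

```go
// uds_init_stub.go - minimal stand-in for a FlexVolume driver's "init" reply,
// illustrating the JSON the kubelet probe above expects on stdout.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		_ = json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	_ = json.NewEncoder(os.Stdout).Encode(driverStatus{
		Status:  "Not supported",
		Message: "only init is implemented in this stub",
	})
}
```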
Jan 28 00:56:16.880868 kubelet[2539]: E0128 00:56:16.880768 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.880868 kubelet[2539]: W0128 00:56:16.880824 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.880868 kubelet[2539]: E0128 00:56:16.880856 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.882608 kubelet[2539]: E0128 00:56:16.882487 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.882608 kubelet[2539]: W0128 00:56:16.882539 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.882608 kubelet[2539]: E0128 00:56:16.882566 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.883536 kubelet[2539]: E0128 00:56:16.883404 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.883536 kubelet[2539]: W0128 00:56:16.883475 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.883536 kubelet[2539]: E0128 00:56:16.883521 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.884745 kubelet[2539]: E0128 00:56:16.884505 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.884745 kubelet[2539]: W0128 00:56:16.884564 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.884870 kubelet[2539]: E0128 00:56:16.884850 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.888245 kubelet[2539]: E0128 00:56:16.888028 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.888319 kubelet[2539]: W0128 00:56:16.888175 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.888426 kubelet[2539]: E0128 00:56:16.888298 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.889489 kubelet[2539]: E0128 00:56:16.889375 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.889489 kubelet[2539]: W0128 00:56:16.889394 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.889489 kubelet[2539]: E0128 00:56:16.889416 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.891134 kubelet[2539]: E0128 00:56:16.890178 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.891134 kubelet[2539]: W0128 00:56:16.890194 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.891134 kubelet[2539]: E0128 00:56:16.890262 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.891250 kubelet[2539]: E0128 00:56:16.891115 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.891250 kubelet[2539]: W0128 00:56:16.891239 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.891393 kubelet[2539]: E0128 00:56:16.891326 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.892554 kubelet[2539]: E0128 00:56:16.892485 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.892554 kubelet[2539]: W0128 00:56:16.892535 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.892858 kubelet[2539]: E0128 00:56:16.892656 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.894139 kubelet[2539]: E0128 00:56:16.894077 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.894139 kubelet[2539]: W0128 00:56:16.894120 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.894139 kubelet[2539]: E0128 00:56:16.894142 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.895106 kubelet[2539]: E0128 00:56:16.894585 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.895106 kubelet[2539]: W0128 00:56:16.894634 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.895106 kubelet[2539]: E0128 00:56:16.894653 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.897347 kubelet[2539]: E0128 00:56:16.896239 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.897347 kubelet[2539]: W0128 00:56:16.896260 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.897347 kubelet[2539]: E0128 00:56:16.896279 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.897347 kubelet[2539]: E0128 00:56:16.897303 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.897347 kubelet[2539]: W0128 00:56:16.897326 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.897536 kubelet[2539]: E0128 00:56:16.897350 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.897536 kubelet[2539]: I0128 00:56:16.897506 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bc0d5231-81be-4bd5-ba52-4066772e339a-socket-dir\") pod \"csi-node-driver-5dcgj\" (UID: \"bc0d5231-81be-4bd5-ba52-4066772e339a\") " pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:16.899238 kubelet[2539]: E0128 00:56:16.898259 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.899238 kubelet[2539]: W0128 00:56:16.898278 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.899238 kubelet[2539]: E0128 00:56:16.898294 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.899238 kubelet[2539]: I0128 00:56:16.898376 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc0d5231-81be-4bd5-ba52-4066772e339a-kubelet-dir\") pod \"csi-node-driver-5dcgj\" (UID: \"bc0d5231-81be-4bd5-ba52-4066772e339a\") " pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:16.899486 kubelet[2539]: E0128 00:56:16.899444 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.899486 kubelet[2539]: W0128 00:56:16.899458 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.899486 kubelet[2539]: E0128 00:56:16.899472 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.899859 kubelet[2539]: I0128 00:56:16.899715 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bc0d5231-81be-4bd5-ba52-4066772e339a-registration-dir\") pod \"csi-node-driver-5dcgj\" (UID: \"bc0d5231-81be-4bd5-ba52-4066772e339a\") " pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:16.900215 kubelet[2539]: E0128 00:56:16.900163 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.900215 kubelet[2539]: W0128 00:56:16.900207 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.900317 kubelet[2539]: E0128 00:56:16.900224 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.900832 kubelet[2539]: E0128 00:56:16.900761 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.900832 kubelet[2539]: W0128 00:56:16.900807 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.900832 kubelet[2539]: E0128 00:56:16.900823 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.901469 kubelet[2539]: E0128 00:56:16.901424 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.901469 kubelet[2539]: W0128 00:56:16.901467 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.901579 kubelet[2539]: E0128 00:56:16.901485 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.903381 kubelet[2539]: E0128 00:56:16.902246 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.903381 kubelet[2539]: W0128 00:56:16.902266 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.903381 kubelet[2539]: E0128 00:56:16.902281 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.903972 kubelet[2539]: E0128 00:56:16.903866 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.904058 kubelet[2539]: W0128 00:56:16.903888 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.904207 kubelet[2539]: E0128 00:56:16.904122 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.905187 kubelet[2539]: E0128 00:56:16.905170 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.905262 kubelet[2539]: W0128 00:56:16.905249 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.905310 kubelet[2539]: E0128 00:56:16.905299 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.905966 kubelet[2539]: E0128 00:56:16.905865 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.906081 kubelet[2539]: W0128 00:56:16.905885 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.906196 kubelet[2539]: E0128 00:56:16.906180 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.906817 kubelet[2539]: E0128 00:56:16.906800 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.907029 kubelet[2539]: W0128 00:56:16.906874 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.907029 kubelet[2539]: E0128 00:56:16.906967 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.908461 kubelet[2539]: E0128 00:56:16.908370 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.908990 kubelet[2539]: W0128 00:56:16.908657 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.909107 kubelet[2539]: E0128 00:56:16.909086 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.909867 kubelet[2539]: E0128 00:56:16.909842 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.909867 kubelet[2539]: W0128 00:56:16.909859 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.909985 kubelet[2539]: E0128 00:56:16.909874 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.911111 kubelet[2539]: E0128 00:56:16.910963 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.911111 kubelet[2539]: W0128 00:56:16.910982 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.911111 kubelet[2539]: E0128 00:56:16.910999 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.911550 kubelet[2539]: E0128 00:56:16.911495 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.911550 kubelet[2539]: W0128 00:56:16.911539 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.911630 kubelet[2539]: E0128 00:56:16.911557 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.912295 kubelet[2539]: E0128 00:56:16.912215 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.912295 kubelet[2539]: W0128 00:56:16.912235 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.912295 kubelet[2539]: E0128 00:56:16.912251 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:16.912970 kubelet[2539]: E0128 00:56:16.912780 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:16.912970 kubelet[2539]: W0128 00:56:16.912798 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:16.912970 kubelet[2539]: E0128 00:56:16.912813 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:16.916859 kubelet[2539]: E0128 00:56:16.916357 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:16.922384 containerd[1472]: time="2026-01-28T00:56:16.922232316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcr9v,Uid:d0bb2249-7cd9-4415-b807-92e69bd752bf,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:16.991019 containerd[1472]: time="2026-01-28T00:56:16.990135155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:16.991019 containerd[1472]: time="2026-01-28T00:56:16.990261593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:16.991019 containerd[1472]: time="2026-01-28T00:56:16.990279719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:16.991019 containerd[1472]: time="2026-01-28T00:56:16.990451265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:17.003332 kubelet[2539]: E0128 00:56:17.003265 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.003332 kubelet[2539]: W0128 00:56:17.003326 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.003533 kubelet[2539]: E0128 00:56:17.003364 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.004808 kubelet[2539]: E0128 00:56:17.004008 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.004808 kubelet[2539]: W0128 00:56:17.004026 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.004808 kubelet[2539]: E0128 00:56:17.004040 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.004808 kubelet[2539]: E0128 00:56:17.004777 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.004808 kubelet[2539]: W0128 00:56:17.004791 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.004808 kubelet[2539]: E0128 00:56:17.004805 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.005434 kubelet[2539]: E0128 00:56:17.005369 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.005434 kubelet[2539]: W0128 00:56:17.005412 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.005434 kubelet[2539]: E0128 00:56:17.005431 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.006082 kubelet[2539]: E0128 00:56:17.006012 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.006082 kubelet[2539]: W0128 00:56:17.006057 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.006082 kubelet[2539]: E0128 00:56:17.006076 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.006598 kubelet[2539]: E0128 00:56:17.006512 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.006598 kubelet[2539]: W0128 00:56:17.006555 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.006598 kubelet[2539]: E0128 00:56:17.006570 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.007166 kubelet[2539]: E0128 00:56:17.007100 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.007166 kubelet[2539]: W0128 00:56:17.007144 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.007166 kubelet[2539]: E0128 00:56:17.007162 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.007708 kubelet[2539]: E0128 00:56:17.007610 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.007708 kubelet[2539]: W0128 00:56:17.007651 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.007708 kubelet[2539]: E0128 00:56:17.007665 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.008044 kubelet[2539]: I0128 00:56:17.007887 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwwh\" (UniqueName: \"kubernetes.io/projected/bc0d5231-81be-4bd5-ba52-4066772e339a-kube-api-access-dkwwh\") pod \"csi-node-driver-5dcgj\" (UID: \"bc0d5231-81be-4bd5-ba52-4066772e339a\") " pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:17.008208 kubelet[2539]: E0128 00:56:17.008140 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.008208 kubelet[2539]: W0128 00:56:17.008184 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.008208 kubelet[2539]: E0128 00:56:17.008200 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.009484 kubelet[2539]: E0128 00:56:17.009199 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.009484 kubelet[2539]: W0128 00:56:17.009414 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.009484 kubelet[2539]: E0128 00:56:17.009430 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.012055 kubelet[2539]: E0128 00:56:17.011991 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.012055 kubelet[2539]: W0128 00:56:17.012024 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.012055 kubelet[2539]: E0128 00:56:17.012037 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.012585 kubelet[2539]: E0128 00:56:17.012537 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.012585 kubelet[2539]: W0128 00:56:17.012570 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.012585 kubelet[2539]: E0128 00:56:17.012586 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.013131 kubelet[2539]: I0128 00:56:17.012991 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bc0d5231-81be-4bd5-ba52-4066772e339a-varrun\") pod \"csi-node-driver-5dcgj\" (UID: \"bc0d5231-81be-4bd5-ba52-4066772e339a\") " pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:17.013211 kubelet[2539]: E0128 00:56:17.013142 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.013211 kubelet[2539]: W0128 00:56:17.013156 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.013211 kubelet[2539]: E0128 00:56:17.013169 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.013775 kubelet[2539]: E0128 00:56:17.013643 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.013775 kubelet[2539]: W0128 00:56:17.013657 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.013775 kubelet[2539]: E0128 00:56:17.013667 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.014489 kubelet[2539]: E0128 00:56:17.014173 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.014489 kubelet[2539]: W0128 00:56:17.014187 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.014489 kubelet[2539]: E0128 00:56:17.014198 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.014595 kubelet[2539]: E0128 00:56:17.014556 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.014595 kubelet[2539]: W0128 00:56:17.014567 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.014595 kubelet[2539]: E0128 00:56:17.014576 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.015495 kubelet[2539]: E0128 00:56:17.014964 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.015495 kubelet[2539]: W0128 00:56:17.014976 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.015495 kubelet[2539]: E0128 00:56:17.014986 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.015495 kubelet[2539]: E0128 00:56:17.015324 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.015495 kubelet[2539]: W0128 00:56:17.015334 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.015495 kubelet[2539]: E0128 00:56:17.015344 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.015868 kubelet[2539]: E0128 00:56:17.015716 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.015868 kubelet[2539]: W0128 00:56:17.015727 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.015868 kubelet[2539]: E0128 00:56:17.015738 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.016258 kubelet[2539]: E0128 00:56:17.016203 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.016258 kubelet[2539]: W0128 00:56:17.016238 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.016258 kubelet[2539]: E0128 00:56:17.016249 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.016987 kubelet[2539]: E0128 00:56:17.016809 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.016987 kubelet[2539]: W0128 00:56:17.016823 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.016987 kubelet[2539]: E0128 00:56:17.016837 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.022965 containerd[1472]: time="2026-01-28T00:56:17.022759136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8b7d5df8-wjcnw,Uid:01bf00d4-8e01-4e22-beea-05760ae33475,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f20dc0561433f86409cbefac336e8070ceebe270dac14cebf0327eac5527618\"" Jan 28 00:56:17.033198 kubelet[2539]: E0128 00:56:17.030668 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:17.036521 systemd[1]: Started cri-containerd-0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12.scope - libcontainer container 0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12. Jan 28 00:56:17.039521 containerd[1472]: time="2026-01-28T00:56:17.038391062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 00:56:17.099182 containerd[1472]: time="2026-01-28T00:56:17.099035655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcr9v,Uid:d0bb2249-7cd9-4415-b807-92e69bd752bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\"" Jan 28 00:56:17.100496 kubelet[2539]: E0128 00:56:17.100419 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:17.114388 kubelet[2539]: E0128 00:56:17.114296 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.114388 kubelet[2539]: W0128 00:56:17.114352 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.114388 kubelet[2539]: E0128 00:56:17.114385 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.115251 kubelet[2539]: E0128 00:56:17.115190 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.115251 kubelet[2539]: W0128 00:56:17.115236 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.115363 kubelet[2539]: E0128 00:56:17.115257 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.115857 kubelet[2539]: E0128 00:56:17.115798 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.115857 kubelet[2539]: W0128 00:56:17.115836 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.115857 kubelet[2539]: E0128 00:56:17.115851 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.116454 kubelet[2539]: E0128 00:56:17.116408 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.116454 kubelet[2539]: W0128 00:56:17.116441 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.116454 kubelet[2539]: E0128 00:56:17.116455 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.116974 kubelet[2539]: E0128 00:56:17.116886 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.117016 kubelet[2539]: W0128 00:56:17.116976 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.117016 kubelet[2539]: E0128 00:56:17.116993 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.117516 kubelet[2539]: E0128 00:56:17.117407 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.117516 kubelet[2539]: W0128 00:56:17.117438 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.117516 kubelet[2539]: E0128 00:56:17.117451 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.117975 kubelet[2539]: E0128 00:56:17.117862 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.117975 kubelet[2539]: W0128 00:56:17.117959 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.117975 kubelet[2539]: E0128 00:56:17.117972 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:17.118502 kubelet[2539]: E0128 00:56:17.118442 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.118502 kubelet[2539]: W0128 00:56:17.118475 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.118502 kubelet[2539]: E0128 00:56:17.118487 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.118972 kubelet[2539]: E0128 00:56:17.118884 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.118972 kubelet[2539]: W0128 00:56:17.118959 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.118972 kubelet[2539]: E0128 00:56:17.118971 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.119416 kubelet[2539]: E0128 00:56:17.119347 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.119416 kubelet[2539]: W0128 00:56:17.119381 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.119416 kubelet[2539]: E0128 00:56:17.119392 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:17.143878 kubelet[2539]: E0128 00:56:17.143811 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:17.143878 kubelet[2539]: W0128 00:56:17.143863 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:17.143878 kubelet[2539]: E0128 00:56:17.143965 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:18.477750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170729241.mount: Deactivated successfully. 
Jan 28 00:56:18.904288 kubelet[2539]: E0128 00:56:18.904106 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:20.068856 containerd[1472]: time="2026-01-28T00:56:20.068731090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.069839 containerd[1472]: time="2026-01-28T00:56:20.069742045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 28 00:56:20.071197 containerd[1472]: time="2026-01-28T00:56:20.071131696Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.074072 containerd[1472]: time="2026-01-28T00:56:20.073830183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.075485 containerd[1472]: time="2026-01-28T00:56:20.075412339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.036952682s" Jan 28 00:56:20.075544 containerd[1472]: time="2026-01-28T00:56:20.075483709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 00:56:20.077118 containerd[1472]: time="2026-01-28T00:56:20.077072057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 00:56:20.098991 containerd[1472]: time="2026-01-28T00:56:20.098844568Z" level=info msg="CreateContainer within sandbox \"6f20dc0561433f86409cbefac336e8070ceebe270dac14cebf0327eac5527618\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 00:56:20.128722 containerd[1472]: time="2026-01-28T00:56:20.126675045Z" level=info msg="CreateContainer within sandbox \"6f20dc0561433f86409cbefac336e8070ceebe270dac14cebf0327eac5527618\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06f89e69e8f119adfae5c54bf9faa9757c9850cd130442f035514e28afbb1ec5\"" Jan 28 00:56:20.139839 containerd[1472]: time="2026-01-28T00:56:20.139801854Z" level=info msg="StartContainer for \"06f89e69e8f119adfae5c54bf9faa9757c9850cd130442f035514e28afbb1ec5\"" Jan 28 00:56:20.192205 systemd[1]: Started cri-containerd-06f89e69e8f119adfae5c54bf9faa9757c9850cd130442f035514e28afbb1ec5.scope - libcontainer container 06f89e69e8f119adfae5c54bf9faa9757c9850cd130442f035514e28afbb1ec5. 
Jan 28 00:56:20.261698 containerd[1472]: time="2026-01-28T00:56:20.260831086Z" level=info msg="StartContainer for \"06f89e69e8f119adfae5c54bf9faa9757c9850cd130442f035514e28afbb1ec5\" returns successfully" Jan 28 00:56:20.367870 kubelet[2539]: E0128 00:56:20.363143 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:20.441700 kubelet[2539]: I0128 00:56:20.424170 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c8b7d5df8-wjcnw" podStartSLOduration=1.38443761 podStartE2EDuration="4.42404182s" podCreationTimestamp="2026-01-28 00:56:16 +0000 UTC" firstStartedPulling="2026-01-28 00:56:17.037164851 +0000 UTC m=+25.377977988" lastFinishedPulling="2026-01-28 00:56:20.07676906 +0000 UTC m=+28.417582198" observedRunningTime="2026-01-28 00:56:20.42320318 +0000 UTC m=+28.764016338" watchObservedRunningTime="2026-01-28 00:56:20.42404182 +0000 UTC m=+28.764854958" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.453845 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492422 kubelet[2539]: W0128 00:56:20.454546 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.454581 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.456267 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492422 kubelet[2539]: W0128 00:56:20.456278 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.456290 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.458080 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492422 kubelet[2539]: W0128 00:56:20.458094 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.458106 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.492422 kubelet[2539]: E0128 00:56:20.459464 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492764 kubelet[2539]: W0128 00:56:20.459475 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.459486 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.460985 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492764 kubelet[2539]: W0128 00:56:20.461036 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.461179 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.461565 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492764 kubelet[2539]: W0128 00:56:20.461576 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.461588 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.492764 kubelet[2539]: E0128 00:56:20.463454 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.492764 kubelet[2539]: W0128 00:56:20.463532 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.463550 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.468598 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.494627 kubelet[2539]: W0128 00:56:20.468611 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.468693 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.473745 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.494627 kubelet[2539]: W0128 00:56:20.473758 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.473772 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.475464 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.494627 kubelet[2539]: W0128 00:56:20.475475 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.494627 kubelet[2539]: E0128 00:56:20.475485 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.476613 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.495555 kubelet[2539]: W0128 00:56:20.476665 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.476677 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.477578 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.495555 kubelet[2539]: W0128 00:56:20.477594 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.477608 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.479635 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.495555 kubelet[2539]: W0128 00:56:20.479647 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.479948 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.495555 kubelet[2539]: E0128 00:56:20.481162 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.495950 kubelet[2539]: W0128 00:56:20.481208 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.495950 kubelet[2539]: E0128 00:56:20.481222 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.495950 kubelet[2539]: E0128 00:56:20.482789 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.495950 kubelet[2539]: W0128 00:56:20.482801 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.495950 kubelet[2539]: E0128 00:56:20.482812 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.497149 kubelet[2539]: E0128 00:56:20.496592 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.497149 kubelet[2539]: W0128 00:56:20.496625 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.497149 kubelet[2539]: E0128 00:56:20.496656 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.503673 kubelet[2539]: E0128 00:56:20.502670 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.503673 kubelet[2539]: W0128 00:56:20.502692 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.503673 kubelet[2539]: E0128 00:56:20.502715 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.522305 kubelet[2539]: E0128 00:56:20.521636 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.535403 kubelet[2539]: W0128 00:56:20.528634 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.540323 kubelet[2539]: E0128 00:56:20.539819 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.550367 kubelet[2539]: E0128 00:56:20.550300 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.550367 kubelet[2539]: W0128 00:56:20.550351 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.550534 kubelet[2539]: E0128 00:56:20.550389 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.551276 kubelet[2539]: E0128 00:56:20.550840 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.551276 kubelet[2539]: W0128 00:56:20.550856 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.551276 kubelet[2539]: E0128 00:56:20.550875 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.551519 kubelet[2539]: E0128 00:56:20.551462 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.551519 kubelet[2539]: W0128 00:56:20.551483 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.551519 kubelet[2539]: E0128 00:56:20.551499 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.552546 kubelet[2539]: E0128 00:56:20.552477 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.552546 kubelet[2539]: W0128 00:56:20.552524 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.552546 kubelet[2539]: E0128 00:56:20.552541 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.553343 kubelet[2539]: E0128 00:56:20.553277 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.553343 kubelet[2539]: W0128 00:56:20.553321 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.553343 kubelet[2539]: E0128 00:56:20.553338 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.556343 kubelet[2539]: E0128 00:56:20.556299 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.556343 kubelet[2539]: W0128 00:56:20.556319 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.556343 kubelet[2539]: E0128 00:56:20.556336 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.556793 kubelet[2539]: E0128 00:56:20.556762 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.556793 kubelet[2539]: W0128 00:56:20.556779 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.556793 kubelet[2539]: E0128 00:56:20.556796 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.559107 kubelet[2539]: E0128 00:56:20.559038 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.559107 kubelet[2539]: W0128 00:56:20.559063 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.559107 kubelet[2539]: E0128 00:56:20.559083 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.561391 kubelet[2539]: E0128 00:56:20.561354 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.561613 kubelet[2539]: W0128 00:56:20.561494 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.561613 kubelet[2539]: E0128 00:56:20.561534 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.562354 kubelet[2539]: E0128 00:56:20.562300 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.562354 kubelet[2539]: W0128 00:56:20.562320 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.562354 kubelet[2539]: E0128 00:56:20.562335 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.565421 kubelet[2539]: E0128 00:56:20.565074 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.565421 kubelet[2539]: W0128 00:56:20.565096 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.565421 kubelet[2539]: E0128 00:56:20.565113 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.565780 kubelet[2539]: E0128 00:56:20.565761 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.565885 kubelet[2539]: W0128 00:56:20.565867 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.566218 kubelet[2539]: E0128 00:56:20.566108 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.567802 kubelet[2539]: E0128 00:56:20.567783 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.568797 kubelet[2539]: W0128 00:56:20.568052 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.568797 kubelet[2539]: E0128 00:56:20.568078 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.569143 kubelet[2539]: E0128 00:56:20.568952 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.569143 kubelet[2539]: W0128 00:56:20.568968 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.569143 kubelet[2539]: E0128 00:56:20.568999 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:56:20.571716 kubelet[2539]: E0128 00:56:20.571592 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:56:20.571716 kubelet[2539]: W0128 00:56:20.571640 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:56:20.571716 kubelet[2539]: E0128 00:56:20.571659 2539 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:56:20.784634 containerd[1472]: time="2026-01-28T00:56:20.784471621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.788055 containerd[1472]: time="2026-01-28T00:56:20.787956196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 28 00:56:20.791377 containerd[1472]: time="2026-01-28T00:56:20.789683483Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.793117 containerd[1472]: time="2026-01-28T00:56:20.793065864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:20.795143 containerd[1472]: time="2026-01-28T00:56:20.794766154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 717.644921ms" Jan 28 00:56:20.795143 containerd[1472]: time="2026-01-28T00:56:20.794863294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 00:56:20.801476 containerd[1472]: time="2026-01-28T00:56:20.801417071Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 00:56:20.819783 containerd[1472]: time="2026-01-28T00:56:20.819689081Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723\"" Jan 28 00:56:20.820401 containerd[1472]: time="2026-01-28T00:56:20.820366385Z" level=info msg="StartContainer for \"236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723\"" Jan 28 00:56:20.897198 systemd[1]: Started cri-containerd-236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723.scope - libcontainer container 236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723. Jan 28 00:56:20.903655 kubelet[2539]: E0128 00:56:20.903524 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:20.968333 containerd[1472]: time="2026-01-28T00:56:20.968265555Z" level=info msg="StartContainer for \"236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723\" returns successfully" Jan 28 00:56:20.990393 systemd[1]: cri-containerd-236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723.scope: Deactivated successfully. 
Jan 28 00:56:21.150981 containerd[1472]: time="2026-01-28T00:56:21.150650296Z" level=info msg="shim disconnected" id=236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723 namespace=k8s.io Jan 28 00:56:21.151565 containerd[1472]: time="2026-01-28T00:56:21.150984900Z" level=warning msg="cleaning up after shim disconnected" id=236fb6f550ccaec18470d55cb5bab3d58a41faf8acdf6b16181b11dde25e5723 namespace=k8s.io Jan 28 00:56:21.151565 containerd[1472]: time="2026-01-28T00:56:21.151035469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:56:21.369353 kubelet[2539]: I0128 00:56:21.369272 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 00:56:21.370292 kubelet[2539]: E0128 00:56:21.369869 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:21.370578 kubelet[2539]: E0128 00:56:21.370470 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:21.372374 containerd[1472]: time="2026-01-28T00:56:21.372271501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 00:56:22.904245 kubelet[2539]: E0128 00:56:22.904093 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:24.155371 containerd[1472]: time="2026-01-28T00:56:24.154723045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:24.159059 containerd[1472]: time="2026-01-28T00:56:24.156755553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 00:56:24.159126 containerd[1472]: time="2026-01-28T00:56:24.159090991Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:24.163506 containerd[1472]: time="2026-01-28T00:56:24.163407150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:24.166130 containerd[1472]: time="2026-01-28T00:56:24.165887735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.793528012s" Jan 28 00:56:24.166130 containerd[1472]: time="2026-01-28T00:56:24.166038950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 00:56:24.179165 containerd[1472]: time="2026-01-28T00:56:24.179073876Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 00:56:24.210736 containerd[1472]: time="2026-01-28T00:56:24.210650554Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5\"" Jan 28 00:56:24.212365 containerd[1472]: time="2026-01-28T00:56:24.212168593Z" level=info msg="StartContainer for \"e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5\"" Jan 28 00:56:24.312232 systemd[1]: Started cri-containerd-e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5.scope - libcontainer container e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5. Jan 28 00:56:24.408726 containerd[1472]: time="2026-01-28T00:56:24.408477497Z" level=info msg="StartContainer for \"e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5\" returns successfully" Jan 28 00:56:24.904218 kubelet[2539]: E0128 00:56:24.904028 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:25.388975 kubelet[2539]: E0128 00:56:25.388869 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:25.579525 containerd[1472]: time="2026-01-28T00:56:25.579043020Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:56:25.584072 systemd[1]: cri-containerd-e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5.scope: Deactivated successfully. Jan 28 00:56:25.585001 systemd[1]: cri-containerd-e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5.scope: Consumed 1.364s CPU time. Jan 28 00:56:25.628776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5-rootfs.mount: Deactivated successfully. 
Jan 28 00:56:25.631822 containerd[1472]: time="2026-01-28T00:56:25.631762708Z" level=info msg="shim disconnected" id=e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5 namespace=k8s.io Jan 28 00:56:25.632147 containerd[1472]: time="2026-01-28T00:56:25.632034319Z" level=warning msg="cleaning up after shim disconnected" id=e981772b4fe0332583c5544b2ad0e5f27e0005deedf9643580dc79e588da26f5 namespace=k8s.io Jan 28 00:56:25.632147 containerd[1472]: time="2026-01-28T00:56:25.632073574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:56:25.742313 kubelet[2539]: I0128 00:56:25.738134 2539 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 00:56:25.819185 kubelet[2539]: I0128 00:56:25.818991 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ba832a-f5ed-4839-b1f4-f7bf87e5f87b-config-volume\") pod \"coredns-66bc5c9577-bpv6m\" (UID: \"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b\") " pod="kube-system/coredns-66bc5c9577-bpv6m" Jan 28 00:56:25.819185 kubelet[2539]: I0128 00:56:25.819060 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59p9g\" (UniqueName: \"kubernetes.io/projected/71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4-kube-api-access-59p9g\") pod \"calico-apiserver-8f958c6dc-2kx8s\" (UID: \"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4\") " pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" Jan 28 00:56:25.819960 kubelet[2539]: I0128 00:56:25.819145 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxrr\" (UniqueName: \"kubernetes.io/projected/0417d323-0fbe-457b-a078-73d52ee9f54e-kube-api-access-5wxrr\") pod \"goldmane-7c778bb748-x77cp\" (UID: \"0417d323-0fbe-457b-a078-73d52ee9f54e\") " pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:25.819960 kubelet[2539]: I0128 00:56:25.819245 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4-calico-apiserver-certs\") pod \"calico-apiserver-8f958c6dc-2kx8s\" (UID: \"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4\") " pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" Jan 28 00:56:25.819960 kubelet[2539]: I0128 00:56:25.819277 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db03c4f9-a045-4c3b-829c-be263339420b-config-volume\") pod \"coredns-66bc5c9577-z7h5v\" (UID: \"db03c4f9-a045-4c3b-829c-be263339420b\") " pod="kube-system/coredns-66bc5c9577-z7h5v" Jan 28 00:56:25.820348 systemd[1]: Created slice kubepods-burstable-pod52ba832a_f5ed_4839_b1f4_f7bf87e5f87b.slice - libcontainer container kubepods-burstable-pod52ba832a_f5ed_4839_b1f4_f7bf87e5f87b.slice. 
Jan 28 00:56:25.831750 kubelet[2539]: I0128 00:56:25.831529 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsssx\" (UniqueName: \"kubernetes.io/projected/db03c4f9-a045-4c3b-829c-be263339420b-kube-api-access-bsssx\") pod \"coredns-66bc5c9577-z7h5v\" (UID: \"db03c4f9-a045-4c3b-829c-be263339420b\") " pod="kube-system/coredns-66bc5c9577-z7h5v" Jan 28 00:56:25.831750 kubelet[2539]: I0128 00:56:25.831583 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46kmx\" (UniqueName: \"kubernetes.io/projected/52ba832a-f5ed-4839-b1f4-f7bf87e5f87b-kube-api-access-46kmx\") pod \"coredns-66bc5c9577-bpv6m\" (UID: \"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b\") " pod="kube-system/coredns-66bc5c9577-bpv6m" Jan 28 00:56:25.831750 kubelet[2539]: I0128 00:56:25.831614 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0417d323-0fbe-457b-a078-73d52ee9f54e-goldmane-key-pair\") pod \"goldmane-7c778bb748-x77cp\" (UID: \"0417d323-0fbe-457b-a078-73d52ee9f54e\") " pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:25.831750 kubelet[2539]: I0128 00:56:25.831652 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0417d323-0fbe-457b-a078-73d52ee9f54e-config\") pod \"goldmane-7c778bb748-x77cp\" (UID: \"0417d323-0fbe-457b-a078-73d52ee9f54e\") " pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:25.831750 kubelet[2539]: I0128 00:56:25.831681 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0417d323-0fbe-457b-a078-73d52ee9f54e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-x77cp\" (UID: \"0417d323-0fbe-457b-a078-73d52ee9f54e\") " pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:25.844690 systemd[1]: Created slice kubepods-besteffort-pod71b0d016_5a81_42fa_b2b3_99cd7fbb3ba4.slice - libcontainer container kubepods-besteffort-pod71b0d016_5a81_42fa_b2b3_99cd7fbb3ba4.slice. Jan 28 00:56:25.855011 systemd[1]: Created slice kubepods-burstable-poddb03c4f9_a045_4c3b_829c_be263339420b.slice - libcontainer container kubepods-burstable-poddb03c4f9_a045_4c3b_829c_be263339420b.slice. Jan 28 00:56:25.866216 systemd[1]: Created slice kubepods-besteffort-pod0417d323_0fbe_457b_a078_73d52ee9f54e.slice - libcontainer container kubepods-besteffort-pod0417d323_0fbe_457b_a078_73d52ee9f54e.slice. Jan 28 00:56:25.880533 systemd[1]: Created slice kubepods-besteffort-poddb3e5b3d_8d48_4187_bdaf_770b7259aaa2.slice - libcontainer container kubepods-besteffort-poddb3e5b3d_8d48_4187_bdaf_770b7259aaa2.slice. Jan 28 00:56:25.888307 systemd[1]: Created slice kubepods-besteffort-pod46359908_c135_4bdb_a8c6_cd78df04dc7a.slice - libcontainer container kubepods-besteffort-pod46359908_c135_4bdb_a8c6_cd78df04dc7a.slice. Jan 28 00:56:25.899749 systemd[1]: Created slice kubepods-besteffort-pod0d4e568d_f278_4de9_a835_c39874b224a5.slice - libcontainer container kubepods-besteffort-pod0d4e568d_f278_4de9_a835_c39874b224a5.slice. 
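The systemd "Created slice" entries above follow the kubelet's cgroup naming scheme: the pod's QoS class becomes part of the slice prefix and the dashes in the pod UID are replaced by underscores. A small helper reproducing that pattern for the pods in this log (an illustrative sketch under those assumptions, not kubelet code):

```go
// Sketch: rebuild kubepods-style systemd slice names like the ones logged above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a slice name such as
// kubepods-burstable-pod52ba832a_f5ed_4839_b1f4_f7bf87e5f87b.slice
// from a QoS class and pod UID.
func podSliceName(qos, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qos == "guaranteed" {
		// Guaranteed pods are assumed to sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "52ba832a-f5ed-4839-b1f4-f7bf87e5f87b"))
	fmt.Println(podSliceName("besteffort", "0417d323-0fbe-457b-a078-73d52ee9f54e"))
}
```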
Jan 28 00:56:25.933864 kubelet[2539]: I0128 00:56:25.933077 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db3e5b3d-8d48-4187-bdaf-770b7259aaa2-calico-apiserver-certs\") pod \"calico-apiserver-8f958c6dc-zhbk8\" (UID: \"db3e5b3d-8d48-4187-bdaf-770b7259aaa2\") " pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" Jan 28 00:56:25.933864 kubelet[2539]: I0128 00:56:25.933629 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7wg\" (UniqueName: \"kubernetes.io/projected/db3e5b3d-8d48-4187-bdaf-770b7259aaa2-kube-api-access-nr7wg\") pod \"calico-apiserver-8f958c6dc-zhbk8\" (UID: \"db3e5b3d-8d48-4187-bdaf-770b7259aaa2\") " pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" Jan 28 00:56:25.933864 kubelet[2539]: I0128 00:56:25.933732 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-backend-key-pair\") pod \"whisker-8c998d6db-8qxtw\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " pod="calico-system/whisker-8c998d6db-8qxtw" Jan 28 00:56:25.933864 kubelet[2539]: I0128 00:56:25.933761 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nxd\" (UniqueName: \"kubernetes.io/projected/46359908-c135-4bdb-a8c6-cd78df04dc7a-kube-api-access-28nxd\") pod \"whisker-8c998d6db-8qxtw\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " pod="calico-system/whisker-8c998d6db-8qxtw" Jan 28 00:56:25.933864 kubelet[2539]: I0128 00:56:25.933800 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4e568d-f278-4de9-a835-c39874b224a5-tigera-ca-bundle\") pod \"calico-kube-controllers-5ff6b57675-s9qlm\" (UID: \"0d4e568d-f278-4de9-a835-c39874b224a5\") " pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" Jan 28 00:56:25.934674 kubelet[2539]: I0128 00:56:25.933837 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-ca-bundle\") pod \"whisker-8c998d6db-8qxtw\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " pod="calico-system/whisker-8c998d6db-8qxtw" Jan 28 00:56:25.934674 kubelet[2539]: I0128 00:56:25.933888 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xhs\" (UniqueName: \"kubernetes.io/projected/0d4e568d-f278-4de9-a835-c39874b224a5-kube-api-access-b8xhs\") pod \"calico-kube-controllers-5ff6b57675-s9qlm\" (UID: \"0d4e568d-f278-4de9-a835-c39874b224a5\") " pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" Jan 28 00:56:26.141550 kubelet[2539]: E0128 00:56:26.141340 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:26.143386 containerd[1472]: time="2026-01-28T00:56:26.143074242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bpv6m,Uid:52ba832a-f5ed-4839-b1f4-f7bf87e5f87b,Namespace:kube-system,Attempt:0,}" Jan 28 00:56:26.156281 containerd[1472]: time="2026-01-28T00:56:26.156185307Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-2kx8s,Uid:71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:56:26.167119 kubelet[2539]: E0128 00:56:26.166995 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:26.167783 containerd[1472]: time="2026-01-28T00:56:26.167699049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z7h5v,Uid:db03c4f9-a045-4c3b-829c-be263339420b,Namespace:kube-system,Attempt:0,}" Jan 28 00:56:26.181274 containerd[1472]: time="2026-01-28T00:56:26.181209377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x77cp,Uid:0417d323-0fbe-457b-a078-73d52ee9f54e,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:26.190645 containerd[1472]: time="2026-01-28T00:56:26.190335314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-zhbk8,Uid:db3e5b3d-8d48-4187-bdaf-770b7259aaa2,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:56:26.205014 containerd[1472]: time="2026-01-28T00:56:26.204061833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c998d6db-8qxtw,Uid:46359908-c135-4bdb-a8c6-cd78df04dc7a,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:26.208550 containerd[1472]: time="2026-01-28T00:56:26.208440058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff6b57675-s9qlm,Uid:0d4e568d-f278-4de9-a835-c39874b224a5,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:26.402124 kubelet[2539]: E0128 00:56:26.401967 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:26.404649 containerd[1472]: time="2026-01-28T00:56:26.403188033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 00:56:26.444266 containerd[1472]: time="2026-01-28T00:56:26.444169452Z" level=error msg="Failed to destroy network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.445095 containerd[1472]: time="2026-01-28T00:56:26.445017675Z" level=error msg="encountered an error cleaning up failed sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.445224 containerd[1472]: time="2026-01-28T00:56:26.445151216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-2kx8s,Uid:71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.445962 containerd[1472]: time="2026-01-28T00:56:26.445789771Z" level=error msg="Failed to destroy network for sandbox 
\"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.449257 containerd[1472]: time="2026-01-28T00:56:26.449223076Z" level=error msg="encountered an error cleaning up failed sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.449512 containerd[1472]: time="2026-01-28T00:56:26.449395211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bpv6m,Uid:52ba832a-f5ed-4839-b1f4-f7bf87e5f87b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.457731 kubelet[2539]: E0128 00:56:26.457661 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.457841 kubelet[2539]: E0128 00:56:26.457779 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bpv6m" Jan 28 00:56:26.457841 kubelet[2539]: E0128 00:56:26.457818 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bpv6m" Jan 28 00:56:26.457971 kubelet[2539]: E0128 00:56:26.457849 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.457971 kubelet[2539]: E0128 00:56:26.457876 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bpv6m_kube-system(52ba832a-f5ed-4839-b1f4-f7bf87e5f87b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bpv6m_kube-system(52ba832a-f5ed-4839-b1f4-f7bf87e5f87b)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bpv6m" podUID="52ba832a-f5ed-4839-b1f4-f7bf87e5f87b" Jan 28 00:56:26.458085 kubelet[2539]: E0128 00:56:26.457983 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" Jan 28 00:56:26.458085 kubelet[2539]: E0128 00:56:26.458056 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" Jan 28 00:56:26.458143 kubelet[2539]: E0128 00:56:26.458114 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:26.499556 containerd[1472]: time="2026-01-28T00:56:26.498853980Z" level=error msg="Failed to destroy network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.505685 containerd[1472]: time="2026-01-28T00:56:26.505515242Z" level=error msg="encountered an error cleaning up failed sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.505832 containerd[1472]: time="2026-01-28T00:56:26.505660785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z7h5v,Uid:db03c4f9-a045-4c3b-829c-be263339420b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 28 00:56:26.506476 kubelet[2539]: E0128 00:56:26.506352 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.506635 kubelet[2539]: E0128 00:56:26.506600 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z7h5v" Jan 28 00:56:26.506839 kubelet[2539]: E0128 00:56:26.506738 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z7h5v" Jan 28 00:56:26.507447 kubelet[2539]: E0128 00:56:26.507334 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-z7h5v_kube-system(db03c4f9-a045-4c3b-829c-be263339420b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-z7h5v_kube-system(db03c4f9-a045-4c3b-829c-be263339420b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-z7h5v" podUID="db03c4f9-a045-4c3b-829c-be263339420b" Jan 28 00:56:26.515763 containerd[1472]: time="2026-01-28T00:56:26.515512660Z" level=error msg="Failed to destroy network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.518575 containerd[1472]: time="2026-01-28T00:56:26.517835540Z" level=error msg="encountered an error cleaning up failed sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.518575 containerd[1472]: time="2026-01-28T00:56:26.517980262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c998d6db-8qxtw,Uid:46359908-c135-4bdb-a8c6-cd78df04dc7a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.519193 kubelet[2539]: E0128 00:56:26.518237 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.519193 kubelet[2539]: E0128 00:56:26.518309 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c998d6db-8qxtw" Jan 28 00:56:26.519193 kubelet[2539]: E0128 00:56:26.518337 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c998d6db-8qxtw" Jan 28 00:56:26.519508 kubelet[2539]: E0128 00:56:26.518453 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8c998d6db-8qxtw_calico-system(46359908-c135-4bdb-a8c6-cd78df04dc7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8c998d6db-8qxtw_calico-system(46359908-c135-4bdb-a8c6-cd78df04dc7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c998d6db-8qxtw" podUID="46359908-c135-4bdb-a8c6-cd78df04dc7a" Jan 28 00:56:26.524006 containerd[1472]: time="2026-01-28T00:56:26.523948113Z" level=error msg="Failed to destroy network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.524359 containerd[1472]: time="2026-01-28T00:56:26.524328380Z" level=error msg="Failed to destroy network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526308 containerd[1472]: time="2026-01-28T00:56:26.526253789Z" level=error msg="encountered an error cleaning up failed sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526408 
containerd[1472]: time="2026-01-28T00:56:26.526338284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x77cp,Uid:0417d323-0fbe-457b-a078-73d52ee9f54e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526502 containerd[1472]: time="2026-01-28T00:56:26.526351505Z" level=error msg="encountered an error cleaning up failed sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526738 containerd[1472]: time="2026-01-28T00:56:26.526488617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff6b57675-s9qlm,Uid:0d4e568d-f278-4de9-a835-c39874b224a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526866 kubelet[2539]: E0128 00:56:26.526617 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.526866 kubelet[2539]: E0128 00:56:26.526673 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:26.526866 kubelet[2539]: E0128 00:56:26.526693 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-x77cp" Jan 28 00:56:26.527084 containerd[1472]: time="2026-01-28T00:56:26.526838705Z" level=error msg="Failed to destroy network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.527168 kubelet[2539]: E0128 00:56:26.526740 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:26.527443 kubelet[2539]: E0128 00:56:26.527385 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.527625 kubelet[2539]: E0128 00:56:26.527461 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" Jan 28 00:56:26.527625 kubelet[2539]: E0128 00:56:26.527486 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" Jan 28 00:56:26.527625 kubelet[2539]: E0128 00:56:26.527588 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:56:26.528009 containerd[1472]: time="2026-01-28T00:56:26.527838987Z" level=error msg="encountered an error cleaning up failed sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.528108 containerd[1472]: time="2026-01-28T00:56:26.528025760Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-zhbk8,Uid:db3e5b3d-8d48-4187-bdaf-770b7259aaa2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.528597 kubelet[2539]: E0128 00:56:26.528431 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:26.528597 kubelet[2539]: E0128 00:56:26.528471 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" Jan 28 00:56:26.528597 kubelet[2539]: E0128 00:56:26.528492 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" Jan 28 00:56:26.528690 kubelet[2539]: E0128 00:56:26.528579 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:56:26.915485 systemd[1]: Created slice kubepods-besteffort-podbc0d5231_81be_4bd5_ba52_4066772e339a.slice - libcontainer container kubepods-besteffort-podbc0d5231_81be_4bd5_ba52_4066772e339a.slice. 
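Every RunPodSandbox failure in the batch above reports the same underlying condition, quoted verbatim in each error: the Calico CNI plugin stats /var/lib/calico/nodename and rejects every ADD/DEL until the calico/node container is running and has written that file. The sketch below is a minimal illustration of that readiness condition only; the file path is taken from the log text, while the check itself is a simplified stand-in, not the plugin's actual implementation.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path quoted verbatim from the CNI errors above; calico/node writes it
	// once it is running with /var/lib/calico mounted into the container.
	const nodenameFile = "/var/lib/calico/nodename"

	info, err := os.Stat(nodenameFile)
	if err != nil {
		// This is the state the sandboxes above are stuck in: the file is
		// missing, so every CNI ADD/DEL fails until calico/node comes up.
		fmt.Printf("not ready: stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("ready: %s exists (%d bytes); CNI ADD/DEL can proceed\n",
		nodenameFile, info.Size())
}
```

Consistent with this, the failures stop recurring later in the log once the calico-node container is pulled and started and sandbox teardown begins to succeed.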
Jan 28 00:56:26.925055 containerd[1472]: time="2026-01-28T00:56:26.925021265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5dcgj,Uid:bc0d5231-81be-4bd5-ba52-4066772e339a,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:27.058320 containerd[1472]: time="2026-01-28T00:56:27.058217384Z" level=error msg="Failed to destroy network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.059003 containerd[1472]: time="2026-01-28T00:56:27.058849777Z" level=error msg="encountered an error cleaning up failed sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.059080 containerd[1472]: time="2026-01-28T00:56:27.059004219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5dcgj,Uid:bc0d5231-81be-4bd5-ba52-4066772e339a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.059494 kubelet[2539]: E0128 00:56:27.059395 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.060058 kubelet[2539]: E0128 00:56:27.059512 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:27.060058 kubelet[2539]: E0128 00:56:27.059546 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5dcgj" Jan 28 00:56:27.060058 kubelet[2539]: E0128 00:56:27.059662 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:27.065120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d-shm.mount: Deactivated successfully. Jan 28 00:56:27.405465 kubelet[2539]: I0128 00:56:27.405281 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:27.408217 kubelet[2539]: I0128 00:56:27.408028 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:27.414106 kubelet[2539]: I0128 00:56:27.412343 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:27.416378 kubelet[2539]: I0128 00:56:27.415060 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:27.461819 containerd[1472]: time="2026-01-28T00:56:27.461764805Z" level=info msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" Jan 28 00:56:27.463323 containerd[1472]: time="2026-01-28T00:56:27.463277787Z" level=info msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" Jan 28 00:56:27.479931 containerd[1472]: time="2026-01-28T00:56:27.478569200Z" level=info msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" Jan 28 00:56:27.479931 containerd[1472]: time="2026-01-28T00:56:27.479020315Z" level=info msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" Jan 28 00:56:27.481758 containerd[1472]: time="2026-01-28T00:56:27.481294498Z" level=info msg="Ensure that sandbox a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80 in task-service has been cleanup successfully" Jan 28 00:56:27.486330 containerd[1472]: time="2026-01-28T00:56:27.486249375Z" level=info msg="Ensure that sandbox 1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99 in task-service has been cleanup successfully" Jan 28 00:56:27.495994 containerd[1472]: time="2026-01-28T00:56:27.495880837Z" level=info msg="Ensure that sandbox 7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d in task-service has been cleanup successfully" Jan 28 00:56:27.498806 containerd[1472]: time="2026-01-28T00:56:27.498665489Z" level=info msg="Ensure that sandbox 0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43 in task-service has been cleanup successfully" Jan 28 00:56:27.501358 kubelet[2539]: I0128 00:56:27.500845 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:27.504472 containerd[1472]: time="2026-01-28T00:56:27.504083931Z" level=info msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" Jan 28 00:56:27.505156 containerd[1472]: time="2026-01-28T00:56:27.504777944Z" level=info msg="Ensure that sandbox 
cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d in task-service has been cleanup successfully" Jan 28 00:56:27.511036 kubelet[2539]: I0128 00:56:27.510968 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:27.514380 containerd[1472]: time="2026-01-28T00:56:27.514348352Z" level=info msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" Jan 28 00:56:27.514735 containerd[1472]: time="2026-01-28T00:56:27.514708706Z" level=info msg="Ensure that sandbox 9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27 in task-service has been cleanup successfully" Jan 28 00:56:27.517304 kubelet[2539]: I0128 00:56:27.517236 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:27.519657 containerd[1472]: time="2026-01-28T00:56:27.519550175Z" level=info msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" Jan 28 00:56:27.523100 containerd[1472]: time="2026-01-28T00:56:27.523017905Z" level=info msg="Ensure that sandbox e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c in task-service has been cleanup successfully" Jan 28 00:56:27.526746 kubelet[2539]: I0128 00:56:27.526227 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:27.531063 containerd[1472]: time="2026-01-28T00:56:27.531015077Z" level=info msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" Jan 28 00:56:27.532097 containerd[1472]: time="2026-01-28T00:56:27.532046468Z" level=info msg="Ensure that sandbox 8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a in task-service has been cleanup successfully" Jan 28 00:56:27.643516 containerd[1472]: time="2026-01-28T00:56:27.643462814Z" level=error msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" failed" error="failed to destroy network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.644381 kubelet[2539]: E0128 00:56:27.644167 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:27.644381 kubelet[2539]: E0128 00:56:27.644255 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c"} Jan 28 00:56:27.644381 kubelet[2539]: E0128 00:56:27.644314 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d4e568d-f278-4de9-a835-c39874b224a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.644381 kubelet[2539]: E0128 00:56:27.644341 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d4e568d-f278-4de9-a835-c39874b224a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:56:27.644821 containerd[1472]: time="2026-01-28T00:56:27.644581165Z" level=error msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" failed" error="failed to destroy network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.645178 kubelet[2539]: E0128 00:56:27.645053 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:27.645178 kubelet[2539]: E0128 00:56:27.645100 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99"} Jan 28 00:56:27.645178 kubelet[2539]: E0128 00:56:27.645122 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0417d323-0fbe-457b-a078-73d52ee9f54e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.645178 kubelet[2539]: E0128 00:56:27.645154 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0417d323-0fbe-457b-a078-73d52ee9f54e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:27.646699 containerd[1472]: time="2026-01-28T00:56:27.646517286Z" level=error msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" 
failed" error="failed to destroy network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.646940 kubelet[2539]: E0128 00:56:27.646819 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:27.646940 kubelet[2539]: E0128 00:56:27.646880 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d"} Jan 28 00:56:27.647014 kubelet[2539]: E0128 00:56:27.646974 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc0d5231-81be-4bd5-ba52-4066772e339a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.647090 kubelet[2539]: E0128 00:56:27.647003 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc0d5231-81be-4bd5-ba52-4066772e339a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:27.651117 containerd[1472]: time="2026-01-28T00:56:27.650991193Z" level=error msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" failed" error="failed to destroy network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.651246 kubelet[2539]: E0128 00:56:27.651185 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:27.651246 kubelet[2539]: E0128 00:56:27.651219 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43"} Jan 28 00:56:27.651246 kubelet[2539]: E0128 
00:56:27.651240 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46359908-c135-4bdb-a8c6-cd78df04dc7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.651570 kubelet[2539]: E0128 00:56:27.651263 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46359908-c135-4bdb-a8c6-cd78df04dc7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c998d6db-8qxtw" podUID="46359908-c135-4bdb-a8c6-cd78df04dc7a" Jan 28 00:56:27.654616 containerd[1472]: time="2026-01-28T00:56:27.654534448Z" level=error msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" failed" error="failed to destroy network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.655045 kubelet[2539]: E0128 00:56:27.654983 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:27.655151 kubelet[2539]: E0128 00:56:27.655060 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d"} Jan 28 00:56:27.655242 kubelet[2539]: E0128 00:56:27.655206 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.655354 kubelet[2539]: E0128 00:56:27.655255 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:27.658362 containerd[1472]: time="2026-01-28T00:56:27.658201237Z" level=error msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" failed" error="failed to destroy network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.658463 kubelet[2539]: E0128 00:56:27.658420 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:27.658742 kubelet[2539]: E0128 00:56:27.658467 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80"} Jan 28 00:56:27.658742 kubelet[2539]: E0128 00:56:27.658504 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db03c4f9-a045-4c3b-829c-be263339420b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.658742 kubelet[2539]: E0128 00:56:27.658542 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db03c4f9-a045-4c3b-829c-be263339420b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-z7h5v" podUID="db03c4f9-a045-4c3b-829c-be263339420b" Jan 28 00:56:27.664194 containerd[1472]: time="2026-01-28T00:56:27.664080505Z" level=error msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" failed" error="failed to destroy network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.664793 kubelet[2539]: E0128 00:56:27.664706 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:27.664863 kubelet[2539]: E0128 00:56:27.664798 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27"} Jan 28 00:56:27.664863 kubelet[2539]: E0128 00:56:27.664822 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.664863 kubelet[2539]: E0128 00:56:27.664845 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bpv6m" podUID="52ba832a-f5ed-4839-b1f4-f7bf87e5f87b" Jan 28 00:56:27.668543 containerd[1472]: time="2026-01-28T00:56:27.668408222Z" level=error msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" failed" error="failed to destroy network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:56:27.668865 kubelet[2539]: E0128 00:56:27.668787 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:27.668865 kubelet[2539]: E0128 00:56:27.668842 2539 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a"} Jan 28 00:56:27.668865 kubelet[2539]: E0128 00:56:27.668866 2539 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db3e5b3d-8d48-4187-bdaf-770b7259aaa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:56:27.669160 kubelet[2539]: E0128 00:56:27.668888 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db3e5b3d-8d48-4187-bdaf-770b7259aaa2\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:56:32.759213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834839792.mount: Deactivated successfully. Jan 28 00:56:33.087221 containerd[1472]: time="2026-01-28T00:56:33.086982178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:33.088879 containerd[1472]: time="2026-01-28T00:56:33.088780187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 00:56:33.090994 containerd[1472]: time="2026-01-28T00:56:33.090837896Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:33.094427 containerd[1472]: time="2026-01-28T00:56:33.094301858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:56:33.095511 containerd[1472]: time="2026-01-28T00:56:33.095408374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.692148035s" Jan 28 00:56:33.095602 containerd[1472]: time="2026-01-28T00:56:33.095493266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 00:56:33.116818 containerd[1472]: time="2026-01-28T00:56:33.116764972Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 00:56:33.173882 containerd[1472]: time="2026-01-28T00:56:33.173725212Z" level=info msg="CreateContainer within sandbox \"0196f40a726c11ec9425d40d84ed707fecfd4e6697944eb317f66401b8a89f12\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9476e0f131d3d90908282b0a23088a48a9a0e026d5f7da3a2deccc23ede61871\"" Jan 28 00:56:33.175618 containerd[1472]: time="2026-01-28T00:56:33.175572493Z" level=info msg="StartContainer for \"9476e0f131d3d90908282b0a23088a48a9a0e026d5f7da3a2deccc23ede61871\"" Jan 28 00:56:33.337500 systemd[1]: Started cri-containerd-9476e0f131d3d90908282b0a23088a48a9a0e026d5f7da3a2deccc23ede61871.scope - libcontainer container 9476e0f131d3d90908282b0a23088a48a9a0e026d5f7da3a2deccc23ede61871. 
Jan 28 00:56:33.417679 containerd[1472]: time="2026-01-28T00:56:33.417455979Z" level=info msg="StartContainer for \"9476e0f131d3d90908282b0a23088a48a9a0e026d5f7da3a2deccc23ede61871\" returns successfully" Jan 28 00:56:33.568840 kubelet[2539]: E0128 00:56:33.568691 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:33.599399 kubelet[2539]: I0128 00:56:33.595772 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vcr9v" podStartSLOduration=1.601207074 podStartE2EDuration="17.595752362s" podCreationTimestamp="2026-01-28 00:56:16 +0000 UTC" firstStartedPulling="2026-01-28 00:56:17.102278562 +0000 UTC m=+25.443091700" lastFinishedPulling="2026-01-28 00:56:33.09682385 +0000 UTC m=+41.437636988" observedRunningTime="2026-01-28 00:56:33.595540365 +0000 UTC m=+41.936353503" watchObservedRunningTime="2026-01-28 00:56:33.595752362 +0000 UTC m=+41.936565500" Jan 28 00:56:33.650433 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 00:56:33.651489 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 00:56:33.810025 containerd[1472]: time="2026-01-28T00:56:33.809866853Z" level=info msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.981 [INFO][3833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.982 [INFO][3833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" iface="eth0" netns="/var/run/netns/cni-993ebe17-a6ef-5f6c-cc24-16ab1f661dcd" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.982 [INFO][3833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" iface="eth0" netns="/var/run/netns/cni-993ebe17-a6ef-5f6c-cc24-16ab1f661dcd" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.983 [INFO][3833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" iface="eth0" netns="/var/run/netns/cni-993ebe17-a6ef-5f6c-cc24-16ab1f661dcd" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.983 [INFO][3833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:33.984 [INFO][3833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.219 [INFO][3844] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.220 [INFO][3844] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.220 [INFO][3844] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.260 [WARNING][3844] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.260 [INFO][3844] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.263 [INFO][3844] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:34.273740 containerd[1472]: 2026-01-28 00:56:34.270 [INFO][3833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:34.275844 containerd[1472]: time="2026-01-28T00:56:34.274305568Z" level=info msg="TearDown network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" successfully" Jan 28 00:56:34.275844 containerd[1472]: time="2026-01-28T00:56:34.274601235Z" level=info msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" returns successfully" Jan 28 00:56:34.279416 systemd[1]: run-netns-cni\x2d993ebe17\x2da6ef\x2d5f6c\x2dcc24\x2d16ab1f661dcd.mount: Deactivated successfully. Jan 28 00:56:34.329645 kubelet[2539]: I0128 00:56:34.328765 2539 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-backend-key-pair\") pod \"46359908-c135-4bdb-a8c6-cd78df04dc7a\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " Jan 28 00:56:34.329645 kubelet[2539]: I0128 00:56:34.328838 2539 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-ca-bundle\") pod \"46359908-c135-4bdb-a8c6-cd78df04dc7a\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " Jan 28 00:56:34.329645 kubelet[2539]: I0128 00:56:34.328870 2539 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28nxd\" (UniqueName: \"kubernetes.io/projected/46359908-c135-4bdb-a8c6-cd78df04dc7a-kube-api-access-28nxd\") pod \"46359908-c135-4bdb-a8c6-cd78df04dc7a\" (UID: \"46359908-c135-4bdb-a8c6-cd78df04dc7a\") " Jan 28 00:56:34.332072 kubelet[2539]: I0128 00:56:34.331081 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "46359908-c135-4bdb-a8c6-cd78df04dc7a" (UID: "46359908-c135-4bdb-a8c6-cd78df04dc7a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:56:34.350068 kubelet[2539]: I0128 00:56:34.349839 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "46359908-c135-4bdb-a8c6-cd78df04dc7a" (UID: "46359908-c135-4bdb-a8c6-cd78df04dc7a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:56:34.351671 kubelet[2539]: I0128 00:56:34.351586 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46359908-c135-4bdb-a8c6-cd78df04dc7a-kube-api-access-28nxd" (OuterVolumeSpecName: "kube-api-access-28nxd") pod "46359908-c135-4bdb-a8c6-cd78df04dc7a" (UID: "46359908-c135-4bdb-a8c6-cd78df04dc7a"). InnerVolumeSpecName "kube-api-access-28nxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:56:34.352878 systemd[1]: var-lib-kubelet-pods-46359908\x2dc135\x2d4bdb\x2da8c6\x2dcd78df04dc7a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 00:56:34.357204 systemd[1]: var-lib-kubelet-pods-46359908\x2dc135\x2d4bdb\x2da8c6\x2dcd78df04dc7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d28nxd.mount: Deactivated successfully. Jan 28 00:56:34.432115 kubelet[2539]: I0128 00:56:34.430706 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 00:56:34.432115 kubelet[2539]: I0128 00:56:34.430850 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46359908-c135-4bdb-a8c6-cd78df04dc7a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 00:56:34.432115 kubelet[2539]: I0128 00:56:34.430872 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28nxd\" (UniqueName: \"kubernetes.io/projected/46359908-c135-4bdb-a8c6-cd78df04dc7a-kube-api-access-28nxd\") on node \"localhost\" DevicePath \"\"" Jan 28 00:56:34.580627 kubelet[2539]: I0128 00:56:34.578529 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 00:56:34.580627 kubelet[2539]: E0128 00:56:34.579694 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:34.593964 systemd[1]: Removed slice kubepods-besteffort-pod46359908_c135_4bdb_a8c6_cd78df04dc7a.slice - libcontainer container kubepods-besteffort-pod46359908_c135_4bdb_a8c6_cd78df04dc7a.slice. Jan 28 00:56:34.732166 systemd[1]: Created slice kubepods-besteffort-pod251978dd_1b11_4c38_8024_bd42a42999a9.slice - libcontainer container kubepods-besteffort-pod251978dd_1b11_4c38_8024_bd42a42999a9.slice. 
Jan 28 00:56:34.745656 kubelet[2539]: I0128 00:56:34.745603 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/251978dd-1b11-4c38-8024-bd42a42999a9-whisker-backend-key-pair\") pod \"whisker-864d6fb5f6-7sb5q\" (UID: \"251978dd-1b11-4c38-8024-bd42a42999a9\") " pod="calico-system/whisker-864d6fb5f6-7sb5q" Jan 28 00:56:34.749984 kubelet[2539]: I0128 00:56:34.747303 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/251978dd-1b11-4c38-8024-bd42a42999a9-whisker-ca-bundle\") pod \"whisker-864d6fb5f6-7sb5q\" (UID: \"251978dd-1b11-4c38-8024-bd42a42999a9\") " pod="calico-system/whisker-864d6fb5f6-7sb5q" Jan 28 00:56:34.750428 kubelet[2539]: I0128 00:56:34.750264 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttd9\" (UniqueName: \"kubernetes.io/projected/251978dd-1b11-4c38-8024-bd42a42999a9-kube-api-access-gttd9\") pod \"whisker-864d6fb5f6-7sb5q\" (UID: \"251978dd-1b11-4c38-8024-bd42a42999a9\") " pod="calico-system/whisker-864d6fb5f6-7sb5q" Jan 28 00:56:35.056356 containerd[1472]: time="2026-01-28T00:56:35.056138658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864d6fb5f6-7sb5q,Uid:251978dd-1b11-4c38-8024-bd42a42999a9,Namespace:calico-system,Attempt:0,}" Jan 28 00:56:35.291059 systemd-networkd[1396]: cali2c2941914c9: Link UP Jan 28 00:56:35.293354 systemd-networkd[1396]: cali2c2941914c9: Gained carrier Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.134 [INFO][3866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.159 [INFO][3866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0 whisker-864d6fb5f6- calico-system 251978dd-1b11-4c38-8024-bd42a42999a9 969 0 2026-01-28 00:56:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:864d6fb5f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-864d6fb5f6-7sb5q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2c2941914c9 [] [] }} ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.159 [INFO][3866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.204 [INFO][3881] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" HandleID="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Workload="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.204 [INFO][3881] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" HandleID="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Workload="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-864d6fb5f6-7sb5q", "timestamp":"2026-01-28 00:56:35.204515601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.204 [INFO][3881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.205 [INFO][3881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.205 [INFO][3881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.219 [INFO][3881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.231 [INFO][3881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.241 [INFO][3881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.244 [INFO][3881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.248 [INFO][3881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.249 [INFO][3881] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.252 [INFO][3881] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7 Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.261 [INFO][3881] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.271 [INFO][3881] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.271 [INFO][3881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" host="localhost" Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.271 [INFO][3881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:35.334079 containerd[1472]: 2026-01-28 00:56:35.271 [INFO][3881] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" HandleID="k8s-pod-network.94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Workload="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.275 [INFO][3866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0", GenerateName:"whisker-864d6fb5f6-", Namespace:"calico-system", SelfLink:"", UID:"251978dd-1b11-4c38-8024-bd42a42999a9", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864d6fb5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-864d6fb5f6-7sb5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c2941914c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.276 [INFO][3866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.276 [INFO][3866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2941914c9 ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.291 [INFO][3866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.292 [INFO][3866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0", GenerateName:"whisker-864d6fb5f6-", Namespace:"calico-system", SelfLink:"", UID:"251978dd-1b11-4c38-8024-bd42a42999a9", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864d6fb5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7", Pod:"whisker-864d6fb5f6-7sb5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c2941914c9", MAC:"f2:b3:02:93:39:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:35.342673 containerd[1472]: 2026-01-28 00:56:35.326 [INFO][3866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7" Namespace="calico-system" Pod="whisker-864d6fb5f6-7sb5q" WorkloadEndpoint="localhost-k8s-whisker--864d6fb5f6--7sb5q-eth0" Jan 28 00:56:35.439779 containerd[1472]: time="2026-01-28T00:56:35.439534587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:35.440387 containerd[1472]: time="2026-01-28T00:56:35.439784103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:35.440387 containerd[1472]: time="2026-01-28T00:56:35.439814246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:35.440387 containerd[1472]: time="2026-01-28T00:56:35.440027398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:35.502674 systemd[1]: Started cri-containerd-94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7.scope - libcontainer container 94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7. 
Jan 28 00:56:35.575525 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:35.666203 containerd[1472]: time="2026-01-28T00:56:35.666147515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864d6fb5f6-7sb5q,Uid:251978dd-1b11-4c38-8024-bd42a42999a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"94317afbfae672786aeb6bd244db5308e599e84c71a5a29de8c26462353cb7a7\"" Jan 28 00:56:35.673422 containerd[1472]: time="2026-01-28T00:56:35.673304026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:56:35.746197 containerd[1472]: time="2026-01-28T00:56:35.745996581Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:35.756808 containerd[1472]: time="2026-01-28T00:56:35.748646682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:56:35.757045 containerd[1472]: time="2026-01-28T00:56:35.748853910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:56:35.757499 kubelet[2539]: E0128 00:56:35.757387 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:56:35.758408 kubelet[2539]: E0128 00:56:35.758349 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:56:35.758702 kubelet[2539]: E0128 00:56:35.758659 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:35.760280 containerd[1472]: time="2026-01-28T00:56:35.760208999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:56:35.819632 containerd[1472]: time="2026-01-28T00:56:35.819486852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:35.824322 containerd[1472]: time="2026-01-28T00:56:35.824028379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:56:35.824322 containerd[1472]: time="2026-01-28T00:56:35.824140011Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:56:35.824645 kubelet[2539]: E0128 00:56:35.824549 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:56:35.824645 kubelet[2539]: E0128 00:56:35.824604 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:56:35.825010 kubelet[2539]: E0128 00:56:35.824683 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:35.825010 kubelet[2539]: E0128 00:56:35.824742 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:56:35.908480 kubelet[2539]: I0128 00:56:35.908314 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46359908-c135-4bdb-a8c6-cd78df04dc7a" path="/var/lib/kubelet/pods/46359908-c135-4bdb-a8c6-cd78df04dc7a/volumes" Jan 28 00:56:36.599667 kubelet[2539]: E0128 00:56:36.599527 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:56:37.261416 systemd-networkd[1396]: cali2c2941914c9: Gained IPv6LL Jan 28 00:56:37.606253 kubelet[2539]: E0128 00:56:37.606196 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:56:37.906873 containerd[1472]: time="2026-01-28T00:56:37.906632507Z" level=info msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.097 [INFO][4101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.100 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" iface="eth0" netns="/var/run/netns/cni-27b80c64-20b5-de6a-d5db-86b04c17872c" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.101 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" iface="eth0" netns="/var/run/netns/cni-27b80c64-20b5-de6a-d5db-86b04c17872c" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.101 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" iface="eth0" netns="/var/run/netns/cni-27b80c64-20b5-de6a-d5db-86b04c17872c" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.101 [INFO][4101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.101 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.167 [INFO][4111] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.167 [INFO][4111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.167 [INFO][4111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.179 [WARNING][4111] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.179 [INFO][4111] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.183 [INFO][4111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:38.191208 containerd[1472]: 2026-01-28 00:56:38.186 [INFO][4101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:38.192221 containerd[1472]: time="2026-01-28T00:56:38.191540861Z" level=info msg="TearDown network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" successfully" Jan 28 00:56:38.192221 containerd[1472]: time="2026-01-28T00:56:38.191667860Z" level=info msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" returns successfully" Jan 28 00:56:38.197295 systemd[1]: run-netns-cni\x2d27b80c64\x2d20b5\x2dde6a\x2dd5db\x2d86b04c17872c.mount: Deactivated successfully. 
Jan 28 00:56:38.200312 containerd[1472]: time="2026-01-28T00:56:38.200218525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-zhbk8,Uid:db3e5b3d-8d48-4187-bdaf-770b7259aaa2,Namespace:calico-apiserver,Attempt:1,}" Jan 28 00:56:38.448866 systemd-networkd[1396]: calieac8356445a: Link UP Jan 28 00:56:38.449516 systemd-networkd[1396]: calieac8356445a: Gained carrier Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.300 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.319 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0 calico-apiserver-8f958c6dc- calico-apiserver db3e5b3d-8d48-4187-bdaf-770b7259aaa2 997 0 2026-01-28 00:56:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f958c6dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f958c6dc-zhbk8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieac8356445a [] [] }} ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.319 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.369 [INFO][4135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" HandleID="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.369 [INFO][4135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" HandleID="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f958c6dc-zhbk8", "timestamp":"2026-01-28 00:56:38.369111879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.369 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.369 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.369 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.379 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.386 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.392 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.396 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.400 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.400 [INFO][4135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.404 [INFO][4135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5 Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.414 [INFO][4135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.422 [INFO][4135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.422 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" host="localhost" Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.422 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:38.469664 containerd[1472]: 2026-01-28 00:56:38.422 [INFO][4135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" HandleID="k8s-pod-network.38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.433 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3e5b3d-8d48-4187-bdaf-770b7259aaa2", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f958c6dc-zhbk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac8356445a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.434 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.434 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieac8356445a ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.451 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.451 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3e5b3d-8d48-4187-bdaf-770b7259aaa2", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5", Pod:"calico-apiserver-8f958c6dc-zhbk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac8356445a", MAC:"9a:24:29:37:9c:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:38.470534 containerd[1472]: 2026-01-28 00:56:38.466 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-zhbk8" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:38.502020 containerd[1472]: time="2026-01-28T00:56:38.500090997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:38.502194 containerd[1472]: time="2026-01-28T00:56:38.501994972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:38.502194 containerd[1472]: time="2026-01-28T00:56:38.502047917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:38.502261 containerd[1472]: time="2026-01-28T00:56:38.502170318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:38.544236 systemd[1]: Started cri-containerd-38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5.scope - libcontainer container 38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5. 
Jan 28 00:56:38.565128 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:38.599015 containerd[1472]: time="2026-01-28T00:56:38.598959955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-zhbk8,Uid:db3e5b3d-8d48-4187-bdaf-770b7259aaa2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5\"" Jan 28 00:56:38.602860 containerd[1472]: time="2026-01-28T00:56:38.602490374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:56:38.699119 containerd[1472]: time="2026-01-28T00:56:38.699049926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:38.701858 containerd[1472]: time="2026-01-28T00:56:38.701587171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:56:38.701858 containerd[1472]: time="2026-01-28T00:56:38.701804423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:38.702505 kubelet[2539]: E0128 00:56:38.702426 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:38.704013 kubelet[2539]: E0128 00:56:38.703341 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:38.704013 kubelet[2539]: E0128 00:56:38.703476 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:38.704013 kubelet[2539]: E0128 00:56:38.703512 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:56:39.811706 kubelet[2539]: E0128 00:56:39.811476 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:56:40.399369 systemd-networkd[1396]: calieac8356445a: Gained IPv6LL Jan 28 00:56:40.882167 kubelet[2539]: I0128 00:56:40.882048 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 00:56:40.883146 kubelet[2539]: E0128 00:56:40.882639 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:40.905432 containerd[1472]: time="2026-01-28T00:56:40.905197428Z" level=info msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.023 [INFO][4243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.023 [INFO][4243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" iface="eth0" netns="/var/run/netns/cni-edb848c4-1ce2-32b8-f627-63dbb9040df3" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.024 [INFO][4243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" iface="eth0" netns="/var/run/netns/cni-edb848c4-1ce2-32b8-f627-63dbb9040df3" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.024 [INFO][4243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" iface="eth0" netns="/var/run/netns/cni-edb848c4-1ce2-32b8-f627-63dbb9040df3" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.024 [INFO][4243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.024 [INFO][4243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.103 [INFO][4255] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.104 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.104 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.122 [WARNING][4255] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.122 [INFO][4255] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.154 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:41.168010 containerd[1472]: 2026-01-28 00:56:41.160 [INFO][4243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:41.169873 containerd[1472]: time="2026-01-28T00:56:41.168695587Z" level=info msg="TearDown network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" successfully" Jan 28 00:56:41.169873 containerd[1472]: time="2026-01-28T00:56:41.168866967Z" level=info msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" returns successfully" Jan 28 00:56:41.170706 systemd[1]: run-netns-cni\x2dedb848c4\x2d1ce2\x2d32b8\x2df627\x2d63dbb9040df3.mount: Deactivated successfully. Jan 28 00:56:41.202276 kubelet[2539]: E0128 00:56:41.202202 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:41.205542 containerd[1472]: time="2026-01-28T00:56:41.205426137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bpv6m,Uid:52ba832a-f5ed-4839-b1f4-f7bf87e5f87b,Namespace:kube-system,Attempt:1,}" Jan 28 00:56:41.539065 kernel: bpftool[4344]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 00:56:41.563320 systemd-networkd[1396]: cali8f85635ea5f: Link UP Jan 28 00:56:41.567448 systemd-networkd[1396]: cali8f85635ea5f: Gained carrier Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.347 [INFO][4294] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.392 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--bpv6m-eth0 coredns-66bc5c9577- kube-system 52ba832a-f5ed-4839-b1f4-f7bf87e5f87b 1027 0 2026-01-28 00:55:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-bpv6m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f85635ea5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.392 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" 
Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.478 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" HandleID="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.478 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" HandleID="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a9960), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-bpv6m", "timestamp":"2026-01-28 00:56:41.478075054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.478 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.478 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.478 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.490 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.499 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.513 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.517 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.522 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.522 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.525 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.543 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.552 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.552 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" host="localhost" Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.552 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:41.608054 containerd[1472]: 2026-01-28 00:56:41.552 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" HandleID="k8s-pod-network.4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.557 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bpv6m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-bpv6m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f85635ea5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.557 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" 
Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.557 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f85635ea5f ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.575 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.578 [INFO][4294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bpv6m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a", Pod:"coredns-66bc5c9577-bpv6m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f85635ea5f", MAC:"0e:f4:7f:c3:61:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:41.611117 containerd[1472]: 2026-01-28 00:56:41.599 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a" Namespace="kube-system" 
Pod="coredns-66bc5c9577-bpv6m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:41.670136 containerd[1472]: time="2026-01-28T00:56:41.669449823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:41.670136 containerd[1472]: time="2026-01-28T00:56:41.669562187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:41.670136 containerd[1472]: time="2026-01-28T00:56:41.669580280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:41.670136 containerd[1472]: time="2026-01-28T00:56:41.669753474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:41.713519 kubelet[2539]: E0128 00:56:41.712875 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:41.736607 systemd[1]: Started cri-containerd-4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a.scope - libcontainer container 4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a. Jan 28 00:56:41.774228 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:41.833618 containerd[1472]: time="2026-01-28T00:56:41.831576474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bpv6m,Uid:52ba832a-f5ed-4839-b1f4-f7bf87e5f87b,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a\"" Jan 28 00:56:41.841590 kubelet[2539]: E0128 00:56:41.841487 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:41.849456 containerd[1472]: time="2026-01-28T00:56:41.849253428Z" level=info msg="CreateContainer within sandbox \"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:56:41.906204 containerd[1472]: time="2026-01-28T00:56:41.905375686Z" level=info msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" Jan 28 00:56:41.918187 containerd[1472]: time="2026-01-28T00:56:41.918081971Z" level=info msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" Jan 28 00:56:41.988492 containerd[1472]: time="2026-01-28T00:56:41.988385520Z" level=info msg="CreateContainer within sandbox \"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc9d866a87187f216cd93a7767db59f0d3ce691cd5883a97c7a3a5168a49cb80\"" Jan 28 00:56:41.990281 containerd[1472]: time="2026-01-28T00:56:41.990204825Z" level=info msg="StartContainer for \"bc9d866a87187f216cd93a7767db59f0d3ce691cd5883a97c7a3a5168a49cb80\"" Jan 28 00:56:42.075233 systemd[1]: Started cri-containerd-bc9d866a87187f216cd93a7767db59f0d3ce691cd5883a97c7a3a5168a49cb80.scope - libcontainer container bc9d866a87187f216cd93a7767db59f0d3ce691cd5883a97c7a3a5168a49cb80. 
Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.019 [INFO][4418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.020 [INFO][4418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" iface="eth0" netns="/var/run/netns/cni-336f489a-500d-6ab5-f0b3-7238cc70666a" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.021 [INFO][4418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" iface="eth0" netns="/var/run/netns/cni-336f489a-500d-6ab5-f0b3-7238cc70666a" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.021 [INFO][4418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" iface="eth0" netns="/var/run/netns/cni-336f489a-500d-6ab5-f0b3-7238cc70666a" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.021 [INFO][4418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.021 [INFO][4418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.102 [INFO][4441] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.103 [INFO][4441] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.103 [INFO][4441] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.152 [WARNING][4441] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.154 [INFO][4441] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.160 [INFO][4441] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:42.187461 containerd[1472]: 2026-01-28 00:56:42.172 [INFO][4418] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:42.187461 containerd[1472]: time="2026-01-28T00:56:42.186878259Z" level=info msg="TearDown network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" successfully" Jan 28 00:56:42.187461 containerd[1472]: time="2026-01-28T00:56:42.186985443Z" level=info msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" returns successfully" Jan 28 00:56:42.196195 systemd[1]: run-netns-cni\x2d336f489a\x2d500d\x2d6ab5\x2df0b3\x2d7238cc70666a.mount: Deactivated successfully. Jan 28 00:56:42.207291 containerd[1472]: time="2026-01-28T00:56:42.206578475Z" level=info msg="StartContainer for \"bc9d866a87187f216cd93a7767db59f0d3ce691cd5883a97c7a3a5168a49cb80\" returns successfully" Jan 28 00:56:42.208427 containerd[1472]: time="2026-01-28T00:56:42.208347894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-2kx8s,Uid:71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4,Namespace:calico-apiserver,Attempt:1,}" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.017 [INFO][4414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.018 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" iface="eth0" netns="/var/run/netns/cni-45f5e2c7-89ac-9768-2e31-94069437e0b7" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.020 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" iface="eth0" netns="/var/run/netns/cni-45f5e2c7-89ac-9768-2e31-94069437e0b7" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.023 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" iface="eth0" netns="/var/run/netns/cni-45f5e2c7-89ac-9768-2e31-94069437e0b7" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.024 [INFO][4414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.024 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.108 [INFO][4449] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.108 [INFO][4449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.161 [INFO][4449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.198 [WARNING][4449] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.198 [INFO][4449] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.204 [INFO][4449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:42.224191 containerd[1472]: 2026-01-28 00:56:42.215 [INFO][4414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:42.247092 containerd[1472]: time="2026-01-28T00:56:42.224829021Z" level=info msg="TearDown network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" successfully" Jan 28 00:56:42.247092 containerd[1472]: time="2026-01-28T00:56:42.225567741Z" level=info msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" returns successfully" Jan 28 00:56:42.245318 systemd[1]: run-netns-cni\x2d45f5e2c7\x2d89ac\x2d9768\x2d2e31\x2d94069437e0b7.mount: Deactivated successfully. Jan 28 00:56:42.247411 systemd-networkd[1396]: vxlan.calico: Link UP Jan 28 00:56:42.247422 systemd-networkd[1396]: vxlan.calico: Gained carrier Jan 28 00:56:42.257080 containerd[1472]: time="2026-01-28T00:56:42.256720119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff6b57675-s9qlm,Uid:0d4e568d-f278-4de9-a835-c39874b224a5,Namespace:calico-system,Attempt:1,}" Jan 28 00:56:42.719230 kubelet[2539]: E0128 00:56:42.719159 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:42.788626 kubelet[2539]: I0128 00:56:42.788493 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bpv6m" podStartSLOduration=45.788468975 podStartE2EDuration="45.788468975s" podCreationTimestamp="2026-01-28 00:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:56:42.787326693 +0000 UTC m=+51.128139871" watchObservedRunningTime="2026-01-28 00:56:42.788468975 +0000 UTC m=+51.129282133" Jan 28 00:56:42.800034 systemd-networkd[1396]: cali08ec3ab47da: Link UP Jan 28 00:56:42.802086 systemd-networkd[1396]: cali08ec3ab47da: Gained carrier Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.454 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0 calico-apiserver-8f958c6dc- calico-apiserver 71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4 1039 0 2026-01-28 00:56:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f958c6dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f958c6dc-2kx8s eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08ec3ab47da [] [] }} ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.457 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.589 [INFO][4552] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" HandleID="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.590 [INFO][4552] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" HandleID="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f958c6dc-2kx8s", "timestamp":"2026-01-28 00:56:42.588997064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.590 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.590 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.590 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.607 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.622 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.655 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.664 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.673 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.673 [INFO][4552] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.680 [INFO][4552] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392 Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.703 [INFO][4552] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.734 [INFO][4552] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.736 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" host="localhost" Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.736 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:42.853425 containerd[1472]: 2026-01-28 00:56:42.737 [INFO][4552] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" HandleID="k8s-pod-network.b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.780 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f958c6dc-2kx8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ec3ab47da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.780 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.781 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ec3ab47da ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.807 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.808 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392", Pod:"calico-apiserver-8f958c6dc-2kx8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ec3ab47da", MAC:"ce:6b:b3:bd:37:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:42.856367 containerd[1472]: 2026-01-28 00:56:42.843 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392" Namespace="calico-apiserver" Pod="calico-apiserver-8f958c6dc-2kx8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:42.893731 systemd-networkd[1396]: cali8f85635ea5f: Gained IPv6LL Jan 28 00:56:42.908103 containerd[1472]: time="2026-01-28T00:56:42.907402892Z" level=info msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" Jan 28 00:56:42.909168 containerd[1472]: time="2026-01-28T00:56:42.908168311Z" level=info msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" Jan 28 00:56:42.909168 containerd[1472]: time="2026-01-28T00:56:42.908184958Z" level=info msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" Jan 28 00:56:42.924533 systemd-networkd[1396]: cali89dcf433832: Link UP Jan 28 00:56:42.955876 systemd-networkd[1396]: cali89dcf433832: Gained carrier Jan 28 00:56:42.964175 containerd[1472]: time="2026-01-28T00:56:42.963783603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:42.965069 containerd[1472]: time="2026-01-28T00:56:42.964619348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:42.965069 containerd[1472]: time="2026-01-28T00:56:42.964819181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:42.980373 containerd[1472]: time="2026-01-28T00:56:42.977540610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.511 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0 calico-kube-controllers-5ff6b57675- calico-system 0d4e568d-f278-4de9-a835-c39874b224a5 1038 0 2026-01-28 00:56:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5ff6b57675 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5ff6b57675-s9qlm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali89dcf433832 [] [] }} ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.512 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.659 [INFO][4563] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" HandleID="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.659 [INFO][4563] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" HandleID="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5ff6b57675-s9qlm", "timestamp":"2026-01-28 00:56:42.659376755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.659 [INFO][4563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.737 [INFO][4563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.737 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.774 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.806 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.832 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.848 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.855 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.856 [INFO][4563] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.864 [INFO][4563] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824 Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.884 [INFO][4563] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.901 [INFO][4563] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.902 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" host="localhost" Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.902 [INFO][4563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:43.010580 containerd[1472]: 2026-01-28 00:56:42.902 [INFO][4563] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" HandleID="k8s-pod-network.944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.906 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0", GenerateName:"calico-kube-controllers-5ff6b57675-", Namespace:"calico-system", SelfLink:"", UID:"0d4e568d-f278-4de9-a835-c39874b224a5", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff6b57675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5ff6b57675-s9qlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89dcf433832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.906 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.907 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89dcf433832 ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.959 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.960 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0", GenerateName:"calico-kube-controllers-5ff6b57675-", Namespace:"calico-system", SelfLink:"", UID:"0d4e568d-f278-4de9-a835-c39874b224a5", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff6b57675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824", Pod:"calico-kube-controllers-5ff6b57675-s9qlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89dcf433832", MAC:"52:3f:91:67:c8:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:43.011530 containerd[1472]: 2026-01-28 00:56:42.993 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824" Namespace="calico-system" Pod="calico-kube-controllers-5ff6b57675-s9qlm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:43.087178 systemd[1]: Started cri-containerd-b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392.scope - libcontainer container b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392. Jan 28 00:56:43.167968 containerd[1472]: time="2026-01-28T00:56:43.164433404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:43.167968 containerd[1472]: time="2026-01-28T00:56:43.166192702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:43.167968 containerd[1472]: time="2026-01-28T00:56:43.166222536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:43.184763 containerd[1472]: time="2026-01-28T00:56:43.176323466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:43.275574 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:43.283279 systemd[1]: Started cri-containerd-944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824.scope - libcontainer container 944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824. Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.107 [INFO][4631] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.110 [INFO][4631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" iface="eth0" netns="/var/run/netns/cni-7ebf1efa-fc77-e6f5-88d5-44fc7f2e7ccb" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.111 [INFO][4631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" iface="eth0" netns="/var/run/netns/cni-7ebf1efa-fc77-e6f5-88d5-44fc7f2e7ccb" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.111 [INFO][4631] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" iface="eth0" netns="/var/run/netns/cni-7ebf1efa-fc77-e6f5-88d5-44fc7f2e7ccb" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.111 [INFO][4631] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.111 [INFO][4631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.272 [INFO][4697] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.273 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.273 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.297 [WARNING][4697] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.298 [INFO][4697] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.309 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:43.341060 containerd[1472]: 2026-01-28 00:56:43.317 [INFO][4631] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:43.367185 containerd[1472]: time="2026-01-28T00:56:43.366969595Z" level=info msg="TearDown network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" successfully" Jan 28 00:56:43.367185 containerd[1472]: time="2026-01-28T00:56:43.367015699Z" level=info msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" returns successfully" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.150 [INFO][4649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.162 [INFO][4649] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" iface="eth0" netns="/var/run/netns/cni-c731ddc4-b021-d63d-cedd-eb1b2b57ac8b" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.164 [INFO][4649] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" iface="eth0" netns="/var/run/netns/cni-c731ddc4-b021-d63d-cedd-eb1b2b57ac8b" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.170 [INFO][4649] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" iface="eth0" netns="/var/run/netns/cni-c731ddc4-b021-d63d-cedd-eb1b2b57ac8b" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.170 [INFO][4649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.170 [INFO][4649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.291 [INFO][4722] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.292 [INFO][4722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.309 [INFO][4722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.346 [WARNING][4722] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.346 [INFO][4722] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.350 [INFO][4722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:43.376667 containerd[1472]: 2026-01-28 00:56:43.361 [INFO][4649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:43.377972 containerd[1472]: time="2026-01-28T00:56:43.377684633Z" level=info msg="TearDown network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" successfully" Jan 28 00:56:43.377972 containerd[1472]: time="2026-01-28T00:56:43.377727682Z" level=info msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" returns successfully" Jan 28 00:56:43.377725 systemd[1]: run-netns-cni\x2d7ebf1efa\x2dfc77\x2de6f5\x2d88d5\x2d44fc7f2e7ccb.mount: Deactivated successfully. Jan 28 00:56:43.381718 containerd[1472]: time="2026-01-28T00:56:43.381311862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x77cp,Uid:0417d323-0fbe-457b-a078-73d52ee9f54e,Namespace:calico-system,Attempt:1,}" Jan 28 00:56:43.387555 systemd[1]: run-netns-cni\x2dc731ddc4\x2db021\x2dd63d\x2dcedd\x2deb1b2b57ac8b.mount: Deactivated successfully. Jan 28 00:56:43.387810 containerd[1472]: time="2026-01-28T00:56:43.387584580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5dcgj,Uid:bc0d5231-81be-4bd5-ba52-4066772e339a,Namespace:calico-system,Attempt:1,}" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.182 [INFO][4638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.190 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" iface="eth0" netns="/var/run/netns/cni-10f266c7-dcd9-1e90-b860-9b5ea05e6405" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.193 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" iface="eth0" netns="/var/run/netns/cni-10f266c7-dcd9-1e90-b860-9b5ea05e6405" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.195 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" iface="eth0" netns="/var/run/netns/cni-10f266c7-dcd9-1e90-b860-9b5ea05e6405" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.198 [INFO][4638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.198 [INFO][4638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.348 [INFO][4730] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.350 [INFO][4730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.350 [INFO][4730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.357 [WARNING][4730] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.357 [INFO][4730] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.359 [INFO][4730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:43.392606 containerd[1472]: 2026-01-28 00:56:43.375 [INFO][4638] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:43.396005 containerd[1472]: time="2026-01-28T00:56:43.395500672Z" level=info msg="TearDown network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" successfully" Jan 28 00:56:43.396005 containerd[1472]: time="2026-01-28T00:56:43.395655874Z" level=info msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" returns successfully" Jan 28 00:56:43.401768 kubelet[2539]: E0128 00:56:43.401455 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:43.403112 containerd[1472]: time="2026-01-28T00:56:43.402756178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z7h5v,Uid:db03c4f9-a045-4c3b-829c-be263339420b,Namespace:kube-system,Attempt:1,}" Jan 28 00:56:43.480943 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:43.569771 containerd[1472]: time="2026-01-28T00:56:43.568275199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f958c6dc-2kx8s,Uid:71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392\"" Jan 28 00:56:43.572210 containerd[1472]: time="2026-01-28T00:56:43.572117670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:56:43.624368 containerd[1472]: time="2026-01-28T00:56:43.624177120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff6b57675-s9qlm,Uid:0d4e568d-f278-4de9-a835-c39874b224a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824\"" Jan 28 00:56:43.654352 containerd[1472]: time="2026-01-28T00:56:43.654138465Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:43.656281 containerd[1472]: time="2026-01-28T00:56:43.655713667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:56:43.656281 containerd[1472]: time="2026-01-28T00:56:43.655801397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:43.656386 kubelet[2539]: E0128 00:56:43.656163 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:43.656386 kubelet[2539]: E0128 00:56:43.656346 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:43.657100 kubelet[2539]: 
E0128 00:56:43.656583 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:43.657477 kubelet[2539]: E0128 00:56:43.657310 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:43.661395 containerd[1472]: time="2026-01-28T00:56:43.661275031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:56:43.739443 kubelet[2539]: E0128 00:56:43.739264 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:43.742407 kubelet[2539]: E0128 00:56:43.742253 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:43.746998 containerd[1472]: time="2026-01-28T00:56:43.746847735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:43.751473 containerd[1472]: time="2026-01-28T00:56:43.751269955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:56:43.751473 containerd[1472]: time="2026-01-28T00:56:43.751309672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:56:43.751982 kubelet[2539]: E0128 00:56:43.751885 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:56:43.752553 kubelet[2539]: E0128 00:56:43.752253 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:56:43.752553 kubelet[2539]: E0128 00:56:43.752420 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:43.752553 kubelet[2539]: E0128 00:56:43.752501 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:56:43.823043 systemd-networkd[1396]: cali122133eebd1: Link UP Jan 28 00:56:43.830195 systemd-networkd[1396]: cali122133eebd1: Gained carrier Jan 28 00:56:43.850706 kubelet[2539]: I0128 00:56:43.850579 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 00:56:43.852398 kubelet[2539]: E0128 00:56:43.851239 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.596 [INFO][4774] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--x77cp-eth0 goldmane-7c778bb748- calico-system 0417d323-0fbe-457b-a078-73d52ee9f54e 1058 0 2026-01-28 00:56:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-x77cp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali122133eebd1 [] [] }} ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.596 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.687 [INFO][4841] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" HandleID="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.688 [INFO][4841] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" HandleID="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-x77cp", "timestamp":"2026-01-28 00:56:43.687822375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.689 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.689 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.689 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.703 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.713 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.724 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.734 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.744 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.744 [INFO][4841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.750 [INFO][4841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108 Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.768 [INFO][4841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.789 [INFO][4841] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.789 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" host="localhost" Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.789 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:43.869286 containerd[1472]: 2026-01-28 00:56:43.789 [INFO][4841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" HandleID="k8s-pod-network.4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.805 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--x77cp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0417d323-0fbe-457b-a078-73d52ee9f54e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-x77cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali122133eebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.806 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.806 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali122133eebd1 ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.829 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.832 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--x77cp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0417d323-0fbe-457b-a078-73d52ee9f54e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108", Pod:"goldmane-7c778bb748-x77cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali122133eebd1", MAC:"3e:37:43:d7:27:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:43.871260 containerd[1472]: 2026-01-28 00:56:43.850 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108" Namespace="calico-system" Pod="goldmane-7c778bb748-x77cp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:43.918548 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Jan 28 00:56:43.962370 systemd-networkd[1396]: calie719ed72f3b: Link UP Jan 28 00:56:43.967858 systemd-networkd[1396]: calie719ed72f3b: Gained carrier Jan 28 00:56:43.994225 containerd[1472]: time="2026-01-28T00:56:43.994054153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:43.994225 containerd[1472]: time="2026-01-28T00:56:43.994112669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:43.994848 containerd[1472]: time="2026-01-28T00:56:43.994123970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:43.994848 containerd[1472]: time="2026-01-28T00:56:43.994223782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.599 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--z7h5v-eth0 coredns-66bc5c9577- kube-system db03c4f9-a045-4c3b-829c-be263339420b 1060 0 2026-01-28 00:55:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-z7h5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie719ed72f3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.599 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.689 [INFO][4843] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" HandleID="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.689 [INFO][4843] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" HandleID="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-z7h5v", "timestamp":"2026-01-28 00:56:43.689162189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.690 [INFO][4843] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.790 [INFO][4843] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.790 [INFO][4843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.814 [INFO][4843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.841 [INFO][4843] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.854 [INFO][4843] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.862 [INFO][4843] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.871 [INFO][4843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.871 [INFO][4843] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.880 [INFO][4843] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.892 [INFO][4843] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.908 [INFO][4843] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.908 [INFO][4843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" host="localhost" Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.908 [INFO][4843] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:56:44.029444 containerd[1472]: 2026-01-28 00:56:43.908 [INFO][4843] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" HandleID="k8s-pod-network.99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:43.941 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z7h5v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c4f9-a045-4c3b-829c-be263339420b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-z7h5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie719ed72f3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:43.941 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:43.941 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie719ed72f3b ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:43.967 
[INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:43.974 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z7h5v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c4f9-a045-4c3b-829c-be263339420b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f", Pod:"coredns-66bc5c9577-z7h5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie719ed72f3b", MAC:"e6:74:c9:f9:73:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:44.031492 containerd[1472]: 2026-01-28 00:56:44.005 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f" Namespace="kube-system" Pod="coredns-66bc5c9577-z7h5v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:44.049363 systemd[1]: Started cri-containerd-4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108.scope - libcontainer container 4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108. 
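The v3.WorkloadEndpoint dump above prints the coredns container ports in hexadecimal (Port:0x35, 0x23c1, 0x1f90, 0x1ff5). As an illustrative sketch only (standard Python, not taken from the log), these decode to the same decimal ports the CNI plugin listed earlier for this endpoint: dns 53/UDP, dns-tcp 53/TCP, metrics 9153, liveness-probe 8080, readiness-probe 8181.

    # Hex Port values from the WorkloadEndpoint dump above, decoded to decimal.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23C1,
             "liveness-probe": 0x1F90, "readiness-probe": 0x1FF5}
    assert list(ports.values()) == [53, 53, 9153, 8080, 8181]
    for name, port in ports.items():
        print(f"{name}: {port}")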
Jan 28 00:56:44.100277 systemd-networkd[1396]: calie9c550c5eb5: Link UP Jan 28 00:56:44.103721 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:44.104273 systemd-networkd[1396]: calie9c550c5eb5: Gained carrier Jan 28 00:56:44.152425 containerd[1472]: time="2026-01-28T00:56:44.151636762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:44.152425 containerd[1472]: time="2026-01-28T00:56:44.151749327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:44.152738 containerd[1472]: time="2026-01-28T00:56:44.152423755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:44.153972 containerd[1472]: time="2026-01-28T00:56:44.153065853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.708 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5dcgj-eth0 csi-node-driver- calico-system bc0d5231-81be-4bd5-ba52-4066772e339a 1059 0 2026-01-28 00:56:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5dcgj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie9c550c5eb5 [] [] }} ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.708 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.820 [INFO][4861] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" HandleID="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.828 [INFO][4861] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" HandleID="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5dcgj", "timestamp":"2026-01-28 00:56:43.820807314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.828 [INFO][4861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.914 [INFO][4861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.916 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.958 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:43.976 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.032 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.037 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.042 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.042 [INFO][4861] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.044 [INFO][4861] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4 Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.057 [INFO][4861] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.075 [INFO][4861] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.076 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" host="localhost" Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.077 [INFO][4861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
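The IPAM entries above hand out consecutive addresses (192.168.88.134, .135 and now .136) from the affine block 192.168.88.128/26, i.e. the 64-address range .128 through .191. A minimal Python sketch of that arithmetic, for illustration only, using the standard ipaddress module and the values printed in the log:

    import ipaddress

    # Affine block and the addresses claimed in the IPAM entries above.
    block = ipaddress.ip_network("192.168.88.128/26")
    claimed = ["192.168.88.134", "192.168.88.135", "192.168.88.136"]

    assert block.num_addresses == 64          # .128 through .191
    for ip in claimed:
        assert ipaddress.ip_address(ip) in block
    # The claimed addresses sit at indices 6 to 8 of the block.
    print([str(block[i]) for i in range(6, 9)])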
Jan 28 00:56:44.188022 containerd[1472]: 2026-01-28 00:56:44.078 [INFO][4861] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" HandleID="k8s-pod-network.6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.091 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5dcgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc0d5231-81be-4bd5-ba52-4066772e339a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5dcgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9c550c5eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.092 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.092 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9c550c5eb5 ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.104 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.107 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5dcgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc0d5231-81be-4bd5-ba52-4066772e339a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4", Pod:"csi-node-driver-5dcgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9c550c5eb5", MAC:"c6:91:a0:c2:ca:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:44.188974 containerd[1472]: 2026-01-28 00:56:44.168 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4" Namespace="calico-system" Pod="csi-node-driver-5dcgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:44.191114 systemd[1]: run-netns-cni\x2d10f266c7\x2ddcd9\x2d1e90\x2db860\x2d9b5ea05e6405.mount: Deactivated successfully. Jan 28 00:56:44.212570 systemd[1]: Started cri-containerd-99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f.scope - libcontainer container 99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f. Jan 28 00:56:44.298378 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:44.301545 systemd-networkd[1396]: cali08ec3ab47da: Gained IPv6LL Jan 28 00:56:44.310476 containerd[1472]: time="2026-01-28T00:56:44.310120766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x77cp,Uid:0417d323-0fbe-457b-a078-73d52ee9f54e,Namespace:calico-system,Attempt:1,} returns sandbox id \"4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108\"" Jan 28 00:56:44.369530 containerd[1472]: time="2026-01-28T00:56:44.366956317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:56:44.382709 containerd[1472]: time="2026-01-28T00:56:44.382138048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:56:44.382709 containerd[1472]: time="2026-01-28T00:56:44.382232160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:56:44.382709 containerd[1472]: time="2026-01-28T00:56:44.382251124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:44.382709 containerd[1472]: time="2026-01-28T00:56:44.382385458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:56:44.439399 containerd[1472]: time="2026-01-28T00:56:44.438992449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z7h5v,Uid:db03c4f9-a045-4c3b-829c-be263339420b,Namespace:kube-system,Attempt:1,} returns sandbox id \"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f\"" Jan 28 00:56:44.442152 kubelet[2539]: E0128 00:56:44.442050 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:44.454242 systemd[1]: Started cri-containerd-6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4.scope - libcontainer container 6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4. Jan 28 00:56:44.457765 containerd[1472]: time="2026-01-28T00:56:44.457562397Z" level=info msg="CreateContainer within sandbox \"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:56:44.480018 containerd[1472]: time="2026-01-28T00:56:44.479818238Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:44.481958 containerd[1472]: time="2026-01-28T00:56:44.481732402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:56:44.482034 containerd[1472]: time="2026-01-28T00:56:44.481822272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:44.482354 kubelet[2539]: E0128 00:56:44.482213 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:56:44.482354 kubelet[2539]: E0128 00:56:44.482305 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:56:44.482484 kubelet[2539]: E0128 00:56:44.482427 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Jan 28 00:56:44.482529 kubelet[2539]: E0128 00:56:44.482477 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:44.504164 containerd[1472]: time="2026-01-28T00:56:44.504047091Z" level=info msg="CreateContainer within sandbox \"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3c4aa90e47102a4938dceab7d0a20cee985d036d1b729536ad29aa96190655f\"" Jan 28 00:56:44.508099 containerd[1472]: time="2026-01-28T00:56:44.507979286Z" level=info msg="StartContainer for \"c3c4aa90e47102a4938dceab7d0a20cee985d036d1b729536ad29aa96190655f\"" Jan 28 00:56:44.510408 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:56:44.587543 containerd[1472]: time="2026-01-28T00:56:44.587262034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5dcgj,Uid:bc0d5231-81be-4bd5-ba52-4066772e339a,Namespace:calico-system,Attempt:1,} returns sandbox id \"6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4\"" Jan 28 00:56:44.598525 containerd[1472]: time="2026-01-28T00:56:44.598247737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:56:44.630822 systemd[1]: Started cri-containerd-c3c4aa90e47102a4938dceab7d0a20cee985d036d1b729536ad29aa96190655f.scope - libcontainer container c3c4aa90e47102a4938dceab7d0a20cee985d036d1b729536ad29aa96190655f. 
Jan 28 00:56:44.704234 containerd[1472]: time="2026-01-28T00:56:44.703834375Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:44.707180 containerd[1472]: time="2026-01-28T00:56:44.707045302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:56:44.707298 containerd[1472]: time="2026-01-28T00:56:44.707211255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:56:44.708250 kubelet[2539]: E0128 00:56:44.708106 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:56:44.708250 kubelet[2539]: E0128 00:56:44.708212 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:56:44.708399 kubelet[2539]: E0128 00:56:44.708319 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:44.713740 containerd[1472]: time="2026-01-28T00:56:44.713474008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:56:44.716351 containerd[1472]: time="2026-01-28T00:56:44.716276339Z" level=info msg="StartContainer for \"c3c4aa90e47102a4938dceab7d0a20cee985d036d1b729536ad29aa96190655f\" returns successfully" Jan 28 00:56:44.756117 kubelet[2539]: E0128 00:56:44.756063 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:44.762350 kubelet[2539]: E0128 00:56:44.761783 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:44.765630 kubelet[2539]: E0128 00:56:44.764802 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:56:44.765630 kubelet[2539]: E0128 00:56:44.765525 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:44.765794 kubelet[2539]: E0128 00:56:44.765681 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:44.766309 kubelet[2539]: E0128 00:56:44.766283 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:44.813408 systemd-networkd[1396]: cali89dcf433832: Gained IPv6LL Jan 28 00:56:44.816305 containerd[1472]: time="2026-01-28T00:56:44.814335357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:44.816305 containerd[1472]: time="2026-01-28T00:56:44.816059324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:56:44.816305 containerd[1472]: time="2026-01-28T00:56:44.816173984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:56:44.817408 kubelet[2539]: E0128 00:56:44.817263 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:56:44.817490 kubelet[2539]: E0128 00:56:44.817433 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:56:44.817532 kubelet[2539]: E0128 00:56:44.817505 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:44.817695 kubelet[2539]: E0128 00:56:44.817544 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:45.645361 systemd-networkd[1396]: calie9c550c5eb5: Gained IPv6LL Jan 28 00:56:45.709292 systemd-networkd[1396]: cali122133eebd1: Gained IPv6LL Jan 28 00:56:45.782377 kubelet[2539]: E0128 00:56:45.782200 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:45.786459 kubelet[2539]: E0128 00:56:45.785365 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:56:45.786459 kubelet[2539]: E0128 00:56:45.785609 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:45.823722 kubelet[2539]: I0128 00:56:45.823268 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z7h5v" podStartSLOduration=48.823250211 podStartE2EDuration="48.823250211s" podCreationTimestamp="2026-01-28 00:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:56:44.87630611 +0000 UTC m=+53.217119249" watchObservedRunningTime="2026-01-28 00:56:45.823250211 +0000 UTC m=+54.164063348" Jan 28 00:56:45.839370 systemd-networkd[1396]: calie719ed72f3b: Gained IPv6LL Jan 28 00:56:46.784181 kubelet[2539]: E0128 00:56:46.784042 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:56:50.909558 containerd[1472]: time="2026-01-28T00:56:50.909426727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:56:50.974603 containerd[1472]: time="2026-01-28T00:56:50.974482803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:50.976864 containerd[1472]: time="2026-01-28T00:56:50.976727027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:50.977065 containerd[1472]: time="2026-01-28T00:56:50.976856424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:56:50.978488 kubelet[2539]: E0128 00:56:50.978381 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:50.979227 kubelet[2539]: E0128 00:56:50.978495 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:50.979227 kubelet[2539]: E0128 00:56:50.978616 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:50.979227 kubelet[2539]: E0128 00:56:50.978662 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:56:51.846855 containerd[1472]: time="2026-01-28T00:56:51.846789935Z" level=info msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" Jan 28 00:56:51.910479 containerd[1472]: 
time="2026-01-28T00:56:51.910220134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.909 [WARNING][5133] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z7h5v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c4f9-a045-4c3b-829c-be263339420b", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f", Pod:"coredns-66bc5c9577-z7h5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie719ed72f3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.909 [INFO][5133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.909 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" iface="eth0" netns="" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.909 [INFO][5133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.909 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.944 [INFO][5141] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.945 [INFO][5141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.945 [INFO][5141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.956 [WARNING][5141] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.956 [INFO][5141] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.960 [INFO][5141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:51.971153 containerd[1472]: 2026-01-28 00:56:51.966 [INFO][5133] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:51.971153 containerd[1472]: time="2026-01-28T00:56:51.971013252Z" level=info msg="TearDown network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" successfully" Jan 28 00:56:51.971153 containerd[1472]: time="2026-01-28T00:56:51.971052645Z" level=info msg="StopPodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" returns successfully" Jan 28 00:56:51.973272 containerd[1472]: time="2026-01-28T00:56:51.973206609Z" level=info msg="RemovePodSandbox for \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" Jan 28 00:56:51.976438 containerd[1472]: time="2026-01-28T00:56:51.976395817Z" level=info msg="Forcibly stopping sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\"" Jan 28 00:56:52.007610 containerd[1472]: time="2026-01-28T00:56:52.007467939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:52.009296 containerd[1472]: time="2026-01-28T00:56:52.009163713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:56:52.009551 containerd[1472]: time="2026-01-28T00:56:52.009491100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:56:52.009852 kubelet[2539]: E0128 00:56:52.009769 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:56:52.009852 kubelet[2539]: E0128 00:56:52.009843 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:56:52.011204 kubelet[2539]: E0128 00:56:52.010005 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:52.013952 containerd[1472]: time="2026-01-28T00:56:52.013500003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.045 [WARNING][5160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z7h5v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c4f9-a045-4c3b-829c-be263339420b", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99462b693b9fd9ae6e54fa12f510f7e404f685e7ce28235eca127305971bac3f", Pod:"coredns-66bc5c9577-z7h5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie719ed72f3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.046 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.046 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" iface="eth0" netns="" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.046 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.046 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.085 [INFO][5169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.085 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.085 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.098 [WARNING][5169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.098 [INFO][5169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" HandleID="k8s-pod-network.a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Workload="localhost-k8s-coredns--66bc5c9577--z7h5v-eth0" Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.101 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.110158 containerd[1472]: 2026-01-28 00:56:52.104 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80" Jan 28 00:56:52.110158 containerd[1472]: time="2026-01-28T00:56:52.107750055Z" level=info msg="TearDown network for sandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" successfully" Jan 28 00:56:52.115154 containerd[1472]: time="2026-01-28T00:56:52.115043915Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:52.117409 containerd[1472]: time="2026-01-28T00:56:52.117288377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 28 00:56:52.117409 containerd[1472]: time="2026-01-28T00:56:52.117406345Z" level=info msg="RemovePodSandbox \"a2f4f0d0d4d6a8b62851f4f8f2049837a9a9d8a74f63b7d94c5c08f5efe1ec80\" returns successfully" Jan 28 00:56:52.118454 containerd[1472]: time="2026-01-28T00:56:52.118368577Z" level=info msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" Jan 28 00:56:52.119423 containerd[1472]: time="2026-01-28T00:56:52.119148786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:56:52.119423 containerd[1472]: time="2026-01-28T00:56:52.119233241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:56:52.120453 kubelet[2539]: E0128 00:56:52.119778 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:56:52.120453 kubelet[2539]: E0128 00:56:52.119850 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:56:52.120453 kubelet[2539]: E0128 00:56:52.120146 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:52.120585 kubelet[2539]: E0128 00:56:52.120213 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.177 [WARNING][5186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bpv6m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a", Pod:"coredns-66bc5c9577-bpv6m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f85635ea5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.177 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.177 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" iface="eth0" netns="" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.177 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.177 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.209 [INFO][5194] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.209 [INFO][5194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.209 [INFO][5194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.220 [WARNING][5194] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.220 [INFO][5194] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.224 [INFO][5194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.232225 containerd[1472]: 2026-01-28 00:56:52.228 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.232225 containerd[1472]: time="2026-01-28T00:56:52.232190176Z" level=info msg="TearDown network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" successfully" Jan 28 00:56:52.232225 containerd[1472]: time="2026-01-28T00:56:52.232229990Z" level=info msg="StopPodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" returns successfully" Jan 28 00:56:52.233112 containerd[1472]: time="2026-01-28T00:56:52.233015937Z" level=info msg="RemovePodSandbox for \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" Jan 28 00:56:52.233112 containerd[1472]: time="2026-01-28T00:56:52.233050711Z" level=info msg="Forcibly stopping sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\"" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.298 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bpv6m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52ba832a-f5ed-4839-b1f4-f7bf87e5f87b", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 55, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c987aa480e4c63c580b4d266c5b049219c75dd26734477fef1011a1dd0a811a", Pod:"coredns-66bc5c9577-bpv6m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f85635ea5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.298 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.299 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" iface="eth0" netns="" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.299 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.299 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.333 [INFO][5219] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.333 [INFO][5219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.333 [INFO][5219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.342 [WARNING][5219] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.342 [INFO][5219] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" HandleID="k8s-pod-network.9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Workload="localhost-k8s-coredns--66bc5c9577--bpv6m-eth0" Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.346 [INFO][5219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.352704 containerd[1472]: 2026-01-28 00:56:52.349 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27" Jan 28 00:56:52.353476 containerd[1472]: time="2026-01-28T00:56:52.352754604Z" level=info msg="TearDown network for sandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" successfully" Jan 28 00:56:52.367412 containerd[1472]: time="2026-01-28T00:56:52.367225688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:52.367539 containerd[1472]: time="2026-01-28T00:56:52.367410859Z" level=info msg="RemovePodSandbox \"9e1d56bf26be63683cd8bbeeab06cae7fb6b1ebba9d13199f0d70d8546167f27\" returns successfully" Jan 28 00:56:52.368988 containerd[1472]: time="2026-01-28T00:56:52.368873063Z" level=info msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.434 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--x77cp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0417d323-0fbe-457b-a078-73d52ee9f54e", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108", Pod:"goldmane-7c778bb748-x77cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali122133eebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.435 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.435 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" iface="eth0" netns="" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.435 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.435 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.471 [INFO][5247] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.471 [INFO][5247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.471 [INFO][5247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.482 [WARNING][5247] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.482 [INFO][5247] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.485 [INFO][5247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.494151 containerd[1472]: 2026-01-28 00:56:52.490 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.494151 containerd[1472]: time="2026-01-28T00:56:52.494131379Z" level=info msg="TearDown network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" successfully" Jan 28 00:56:52.494151 containerd[1472]: time="2026-01-28T00:56:52.494166493Z" level=info msg="StopPodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" returns successfully" Jan 28 00:56:52.495055 containerd[1472]: time="2026-01-28T00:56:52.494996243Z" level=info msg="RemovePodSandbox for \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" Jan 28 00:56:52.495055 containerd[1472]: time="2026-01-28T00:56:52.495033442Z" level=info msg="Forcibly stopping sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\"" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.577 [WARNING][5264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--x77cp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0417d323-0fbe-457b-a078-73d52ee9f54e", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a2f67e52aa9cd503c220e2c0c16b28541309b6fc8d36c57085bb2399df64108", Pod:"goldmane-7c778bb748-x77cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali122133eebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.577 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.578 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" iface="eth0" netns="" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.578 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.578 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.633 [INFO][5274] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.634 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.634 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.644 [WARNING][5274] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.644 [INFO][5274] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" HandleID="k8s-pod-network.1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Workload="localhost-k8s-goldmane--7c778bb748--x77cp-eth0" Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.648 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.656031 containerd[1472]: 2026-01-28 00:56:52.652 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99" Jan 28 00:56:52.656031 containerd[1472]: time="2026-01-28T00:56:52.655798241Z" level=info msg="TearDown network for sandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" successfully" Jan 28 00:56:52.663960 containerd[1472]: time="2026-01-28T00:56:52.663610013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:52.663960 containerd[1472]: time="2026-01-28T00:56:52.663706000Z" level=info msg="RemovePodSandbox \"1954728d0c2938fc8c45c35ea0574ba4af6fedb4142477154f4c3f440bb76c99\" returns successfully" Jan 28 00:56:52.666150 containerd[1472]: time="2026-01-28T00:56:52.665547175Z" level=info msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.741 [WARNING][5292] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" WorkloadEndpoint="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.742 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.742 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" iface="eth0" netns="" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.742 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.742 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.802 [INFO][5301] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.803 [INFO][5301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.803 [INFO][5301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.819 [WARNING][5301] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.819 [INFO][5301] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.835 [INFO][5301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:52.862363 containerd[1472]: 2026-01-28 00:56:52.853 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:52.863694 containerd[1472]: time="2026-01-28T00:56:52.863028978Z" level=info msg="TearDown network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" successfully" Jan 28 00:56:52.863694 containerd[1472]: time="2026-01-28T00:56:52.863074971Z" level=info msg="StopPodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" returns successfully" Jan 28 00:56:52.867973 containerd[1472]: time="2026-01-28T00:56:52.867120768Z" level=info msg="RemovePodSandbox for \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" Jan 28 00:56:52.867973 containerd[1472]: time="2026-01-28T00:56:52.867458150Z" level=info msg="Forcibly stopping sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\"" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:52.944 [WARNING][5319] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" WorkloadEndpoint="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:52.945 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:52.945 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" iface="eth0" netns="" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:52.945 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:52.945 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.005 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.005 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.005 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.017 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.017 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" HandleID="k8s-pod-network.0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Workload="localhost-k8s-whisker--8c998d6db--8qxtw-eth0" Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.020 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.031565 containerd[1472]: 2026-01-28 00:56:53.024 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43" Jan 28 00:56:53.031565 containerd[1472]: time="2026-01-28T00:56:53.030865037Z" level=info msg="TearDown network for sandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" successfully" Jan 28 00:56:53.047961 containerd[1472]: time="2026-01-28T00:56:53.047666068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:53.047961 containerd[1472]: time="2026-01-28T00:56:53.047746406Z" level=info msg="RemovePodSandbox \"0346c23a9b846cfa509f0ace0e01c43e03ce3d4256b970b49d9ada838329ec43\" returns successfully" Jan 28 00:56:53.048670 containerd[1472]: time="2026-01-28T00:56:53.048579658Z" level=info msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.111 [WARNING][5346] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0", GenerateName:"calico-kube-controllers-5ff6b57675-", Namespace:"calico-system", SelfLink:"", UID:"0d4e568d-f278-4de9-a835-c39874b224a5", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff6b57675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824", Pod:"calico-kube-controllers-5ff6b57675-s9qlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89dcf433832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.112 [INFO][5346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.112 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" iface="eth0" netns="" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.112 [INFO][5346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.112 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.151 [INFO][5354] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.151 [INFO][5354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.151 [INFO][5354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.157 [WARNING][5354] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.157 [INFO][5354] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.160 [INFO][5354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.173020 containerd[1472]: 2026-01-28 00:56:53.166 [INFO][5346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.173020 containerd[1472]: time="2026-01-28T00:56:53.173007628Z" level=info msg="TearDown network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" successfully" Jan 28 00:56:53.173020 containerd[1472]: time="2026-01-28T00:56:53.173035731Z" level=info msg="StopPodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" returns successfully" Jan 28 00:56:53.174239 containerd[1472]: time="2026-01-28T00:56:53.174181753Z" level=info msg="RemovePodSandbox for \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" Jan 28 00:56:53.174363 containerd[1472]: time="2026-01-28T00:56:53.174255628Z" level=info msg="Forcibly stopping sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\"" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.241 [WARNING][5372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0", GenerateName:"calico-kube-controllers-5ff6b57675-", Namespace:"calico-system", SelfLink:"", UID:"0d4e568d-f278-4de9-a835-c39874b224a5", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff6b57675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944da0be54c9c6a7e2d92d5ff4acae4465364f4b35addff0305484617af76824", Pod:"calico-kube-controllers-5ff6b57675-s9qlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89dcf433832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.242 [INFO][5372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.242 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" iface="eth0" netns="" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.242 [INFO][5372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.242 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.274 [INFO][5380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.274 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.275 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.285 [WARNING][5380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.285 [INFO][5380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" HandleID="k8s-pod-network.e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Workload="localhost-k8s-calico--kube--controllers--5ff6b57675--s9qlm-eth0" Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.287 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.295110 containerd[1472]: 2026-01-28 00:56:53.291 [INFO][5372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c" Jan 28 00:56:53.295110 containerd[1472]: time="2026-01-28T00:56:53.295032689Z" level=info msg="TearDown network for sandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" successfully" Jan 28 00:56:53.301235 containerd[1472]: time="2026-01-28T00:56:53.301089132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:53.301235 containerd[1472]: time="2026-01-28T00:56:53.301188394Z" level=info msg="RemovePodSandbox \"e6904fd19ace567635e44168b477a856115f12c4ecf0993b7cba62a9878fd49c\" returns successfully" Jan 28 00:56:53.304278 containerd[1472]: time="2026-01-28T00:56:53.304124799Z" level=info msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.359 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5dcgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc0d5231-81be-4bd5-ba52-4066772e339a", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4", Pod:"csi-node-driver-5dcgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9c550c5eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.361 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.361 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" iface="eth0" netns="" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.361 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.361 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.414 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.416 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.417 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.431 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.432 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.436 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.451974 containerd[1472]: 2026-01-28 00:56:53.443 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.451974 containerd[1472]: time="2026-01-28T00:56:53.450135534Z" level=info msg="TearDown network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" successfully" Jan 28 00:56:53.451974 containerd[1472]: time="2026-01-28T00:56:53.450248912Z" level=info msg="StopPodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" returns successfully" Jan 28 00:56:53.451974 containerd[1472]: time="2026-01-28T00:56:53.451526274Z" level=info msg="RemovePodSandbox for \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" Jan 28 00:56:53.451974 containerd[1472]: time="2026-01-28T00:56:53.451565766Z" level=info msg="Forcibly stopping sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\"" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.542 [WARNING][5428] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5dcgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc0d5231-81be-4bd5-ba52-4066772e339a", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cb9ded54cceb4abf25c72db3b7c1f1261be2e9ace12c953a3f3b17961b488e4", Pod:"csi-node-driver-5dcgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9c550c5eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.542 [INFO][5428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.542 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" iface="eth0" netns="" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.542 [INFO][5428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.542 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.648 [INFO][5438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.649 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.649 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.659 [WARNING][5438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.659 [INFO][5438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" HandleID="k8s-pod-network.cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Workload="localhost-k8s-csi--node--driver--5dcgj-eth0" Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.664 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.671753 containerd[1472]: 2026-01-28 00:56:53.667 [INFO][5428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d" Jan 28 00:56:53.672825 containerd[1472]: time="2026-01-28T00:56:53.671772738Z" level=info msg="TearDown network for sandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" successfully" Jan 28 00:56:53.686398 containerd[1472]: time="2026-01-28T00:56:53.686162971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:53.686398 containerd[1472]: time="2026-01-28T00:56:53.686262124Z" level=info msg="RemovePodSandbox \"cc03f9caf1472a8e36d22dc309094f975ab8c53776d622849404879740a6227d\" returns successfully" Jan 28 00:56:53.687857 containerd[1472]: time="2026-01-28T00:56:53.687763090Z" level=info msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.768 [WARNING][5455] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3e5b3d-8d48-4187-bdaf-770b7259aaa2", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5", Pod:"calico-apiserver-8f958c6dc-zhbk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac8356445a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.768 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.768 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" iface="eth0" netns="" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.768 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.768 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.819 [INFO][5463] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.819 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.820 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.845 [WARNING][5463] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.846 [INFO][5463] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.862 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:53.872175 containerd[1472]: 2026-01-28 00:56:53.866 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:53.872175 containerd[1472]: time="2026-01-28T00:56:53.872194904Z" level=info msg="TearDown network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" successfully" Jan 28 00:56:53.872175 containerd[1472]: time="2026-01-28T00:56:53.872221583Z" level=info msg="StopPodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" returns successfully" Jan 28 00:56:53.873093 containerd[1472]: time="2026-01-28T00:56:53.872785104Z" level=info msg="RemovePodSandbox for \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" Jan 28 00:56:53.873093 containerd[1472]: time="2026-01-28T00:56:53.872812924Z" level=info msg="Forcibly stopping sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\"" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.957 [WARNING][5482] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3e5b3d-8d48-4187-bdaf-770b7259aaa2", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38295fb344dd9c506da610a98808b998aebc58a494b738115e1ebf614b4449f5", Pod:"calico-apiserver-8f958c6dc-zhbk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac8356445a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.958 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.958 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" iface="eth0" netns="" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.958 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.958 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:53.999 [INFO][5491] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.000 [INFO][5491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.000 [INFO][5491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.012 [WARNING][5491] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.012 [INFO][5491] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" HandleID="k8s-pod-network.8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Workload="localhost-k8s-calico--apiserver--8f958c6dc--zhbk8-eth0" Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.015 [INFO][5491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:54.022623 containerd[1472]: 2026-01-28 00:56:54.018 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a" Jan 28 00:56:54.022623 containerd[1472]: time="2026-01-28T00:56:54.022264502Z" level=info msg="TearDown network for sandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" successfully" Jan 28 00:56:54.042304 containerd[1472]: time="2026-01-28T00:56:54.042112660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:54.042304 containerd[1472]: time="2026-01-28T00:56:54.042226381Z" level=info msg="RemovePodSandbox \"8a1e801840db1b3aa8640f8161721bf5bd062f0dfd5011d7833726d02338803a\" returns successfully" Jan 28 00:56:54.044207 containerd[1472]: time="2026-01-28T00:56:54.043111153Z" level=info msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.119 [WARNING][5508] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392", Pod:"calico-apiserver-8f958c6dc-2kx8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ec3ab47da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.119 [INFO][5508] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.119 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" iface="eth0" netns="" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.119 [INFO][5508] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.119 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.187 [INFO][5517] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.188 [INFO][5517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.188 [INFO][5517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.200 [WARNING][5517] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.200 [INFO][5517] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.204 [INFO][5517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:54.212622 containerd[1472]: 2026-01-28 00:56:54.208 [INFO][5508] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.212622 containerd[1472]: time="2026-01-28T00:56:54.212304959Z" level=info msg="TearDown network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" successfully" Jan 28 00:56:54.212622 containerd[1472]: time="2026-01-28T00:56:54.212350794Z" level=info msg="StopPodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" returns successfully" Jan 28 00:56:54.215862 containerd[1472]: time="2026-01-28T00:56:54.213615299Z" level=info msg="RemovePodSandbox for \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" Jan 28 00:56:54.215862 containerd[1472]: time="2026-01-28T00:56:54.213655474Z" level=info msg="Forcibly stopping sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\"" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.307 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0", GenerateName:"calico-apiserver-8f958c6dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f958c6dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3e779c80c094b31f108934660aaae2c2ad4f294912f861a521407f2263ca392", Pod:"calico-apiserver-8f958c6dc-2kx8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ec3ab47da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.307 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.307 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" iface="eth0" netns="" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.308 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.308 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.362 [INFO][5543] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.362 [INFO][5543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.362 [INFO][5543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.372 [WARNING][5543] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.372 [INFO][5543] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" HandleID="k8s-pod-network.7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Workload="localhost-k8s-calico--apiserver--8f958c6dc--2kx8s-eth0" Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.376 [INFO][5543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:56:54.383148 containerd[1472]: 2026-01-28 00:56:54.379 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d" Jan 28 00:56:54.383817 containerd[1472]: time="2026-01-28T00:56:54.383203987Z" level=info msg="TearDown network for sandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" successfully" Jan 28 00:56:54.388197 containerd[1472]: time="2026-01-28T00:56:54.387990876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:56:54.388197 containerd[1472]: time="2026-01-28T00:56:54.388069121Z" level=info msg="RemovePodSandbox \"7b4ea537b643868e997f2cfe96cd739dc67a88594998c5760ff36f18eb32d99d\" returns successfully" Jan 28 00:56:55.906228 containerd[1472]: time="2026-01-28T00:56:55.906171132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:56:55.988304 containerd[1472]: time="2026-01-28T00:56:55.988163259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:55.990844 containerd[1472]: time="2026-01-28T00:56:55.990599114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:55.990844 containerd[1472]: time="2026-01-28T00:56:55.990789625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:56:55.991356 kubelet[2539]: E0128 00:56:55.991211 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:55.991356 kubelet[2539]: E0128 00:56:55.991349 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:56:55.992184 kubelet[2539]: E0128 00:56:55.991494 2539 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:55.992184 kubelet[2539]: E0128 00:56:55.991534 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:56:56.909329 containerd[1472]: time="2026-01-28T00:56:56.907867939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:56:57.010731 containerd[1472]: time="2026-01-28T00:56:57.010496440Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:57.013034 containerd[1472]: time="2026-01-28T00:56:57.012720605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:56:57.013034 containerd[1472]: time="2026-01-28T00:56:57.012855866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:56:57.013319 kubelet[2539]: E0128 00:56:57.013193 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:56:57.013319 kubelet[2539]: E0128 00:56:57.013306 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:56:57.014003 kubelet[2539]: E0128 00:56:57.013551 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:57.014003 kubelet[2539]: E0128 00:56:57.013597 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:56:57.014445 containerd[1472]: time="2026-01-28T00:56:57.014413771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:56:57.087945 containerd[1472]: time="2026-01-28T00:56:57.087770789Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:57.091302 containerd[1472]: time="2026-01-28T00:56:57.091074981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:56:57.091302 containerd[1472]: time="2026-01-28T00:56:57.091286112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:56:57.091554 kubelet[2539]: E0128 00:56:57.091495 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:56:57.091616 kubelet[2539]: E0128 00:56:57.091568 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:56:57.091951 kubelet[2539]: E0128 00:56:57.091809 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:57.092133 kubelet[2539]: E0128 00:56:57.091964 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:56:57.093603 containerd[1472]: time="2026-01-28T00:56:57.092774391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:56:57.175756 containerd[1472]: time="2026-01-28T00:56:57.175373502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:57.181661 containerd[1472]: time="2026-01-28T00:56:57.180419917Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:56:57.181661 containerd[1472]: time="2026-01-28T00:56:57.180540430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:56:57.181812 kubelet[2539]: E0128 00:56:57.180823 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:56:57.181812 kubelet[2539]: E0128 00:56:57.181015 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:56:57.181812 kubelet[2539]: E0128 00:56:57.181167 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:57.182982 containerd[1472]: time="2026-01-28T00:56:57.182854792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:56:57.255812 containerd[1472]: time="2026-01-28T00:56:57.255712753Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:56:57.258989 containerd[1472]: time="2026-01-28T00:56:57.257862026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:56:57.258989 containerd[1472]: time="2026-01-28T00:56:57.258041559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:56:57.259102 kubelet[2539]: E0128 00:56:57.258477 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:56:57.259102 kubelet[2539]: E0128 00:56:57.258551 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:56:57.259102 kubelet[2539]: E0128 00:56:57.258662 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:56:57.259386 kubelet[2539]: E0128 00:56:57.258733 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:57:01.906487 kubelet[2539]: E0128 00:57:01.906346 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:57:03.917160 kubelet[2539]: E0128 00:57:03.916842 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:57:07.911320 kubelet[2539]: E0128 00:57:07.911152 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:57:08.906807 kubelet[2539]: E0128 00:57:08.906710 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:57:11.929852 kubelet[2539]: E0128 00:57:11.929638 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:57:12.904761 kubelet[2539]: E0128 00:57:12.904624 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:12.905716 kubelet[2539]: E0128 00:57:12.905243 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:12.910536 kubelet[2539]: E0128 00:57:12.910379 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:57:14.904665 kubelet[2539]: E0128 00:57:14.904331 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:15.906969 containerd[1472]: time="2026-01-28T00:57:15.906837446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:57:15.978776 containerd[1472]: 
time="2026-01-28T00:57:15.978626904Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:15.980472 containerd[1472]: time="2026-01-28T00:57:15.980336242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:57:15.980999 containerd[1472]: time="2026-01-28T00:57:15.980562928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:57:15.981603 kubelet[2539]: E0128 00:57:15.980887 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:57:15.981603 kubelet[2539]: E0128 00:57:15.981165 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:57:15.981603 kubelet[2539]: E0128 00:57:15.981287 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:15.981603 kubelet[2539]: E0128 00:57:15.981376 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:57:17.667502 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:55878.service - OpenSSH per-connection server daemon (10.0.0.1:55878). 
Jan 28 00:57:17.970268 containerd[1472]: time="2026-01-28T00:57:17.969830403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:57:18.050028 containerd[1472]: time="2026-01-28T00:57:18.049834472Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:18.054838 containerd[1472]: time="2026-01-28T00:57:18.053115088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:57:18.054838 containerd[1472]: time="2026-01-28T00:57:18.053240935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:57:18.055082 kubelet[2539]: E0128 00:57:18.054128 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:57:18.055082 kubelet[2539]: E0128 00:57:18.054184 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:57:18.055082 kubelet[2539]: E0128 00:57:18.054268 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:18.058257 containerd[1472]: time="2026-01-28T00:57:18.057481029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:57:18.061532 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 55878 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:18.064314 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:18.076615 systemd-logind[1461]: New session 8 of user core. Jan 28 00:57:18.086155 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 00:57:18.143598 containerd[1472]: time="2026-01-28T00:57:18.143496092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:18.150755 containerd[1472]: time="2026-01-28T00:57:18.150542586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:57:18.151149 containerd[1472]: time="2026-01-28T00:57:18.150584098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:57:18.153017 kubelet[2539]: E0128 00:57:18.151540 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:57:18.153017 kubelet[2539]: E0128 00:57:18.151611 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:57:18.153017 kubelet[2539]: E0128 00:57:18.151774 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:18.153220 kubelet[2539]: E0128 00:57:18.151829 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:57:18.673795 sshd[5590]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:18.680825 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:57:18.684734 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:55878.service: Deactivated successfully. Jan 28 00:57:18.696127 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:57:18.703193 systemd-logind[1461]: Removed session 8. 
Jan 28 00:57:19.909585 containerd[1472]: time="2026-01-28T00:57:19.909173070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:57:20.000218 containerd[1472]: time="2026-01-28T00:57:19.999765102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:20.005176 containerd[1472]: time="2026-01-28T00:57:20.005035490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:57:20.005298 containerd[1472]: time="2026-01-28T00:57:20.005168381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:57:20.006620 kubelet[2539]: E0128 00:57:20.005356 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:57:20.006620 kubelet[2539]: E0128 00:57:20.006582 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:57:20.007719 kubelet[2539]: E0128 00:57:20.006700 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:20.007719 kubelet[2539]: E0128 00:57:20.006750 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:57:20.904389 kubelet[2539]: E0128 00:57:20.904145 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:21.907663 containerd[1472]: time="2026-01-28T00:57:21.907215812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:57:22.013671 containerd[1472]: time="2026-01-28T00:57:22.013609519Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:22.017104 containerd[1472]: 
time="2026-01-28T00:57:22.016277242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:57:22.017104 containerd[1472]: time="2026-01-28T00:57:22.016397630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:57:22.017455 kubelet[2539]: E0128 00:57:22.017253 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:57:22.017455 kubelet[2539]: E0128 00:57:22.017311 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:57:22.018132 kubelet[2539]: E0128 00:57:22.017415 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:22.018132 kubelet[2539]: E0128 00:57:22.017514 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:57:22.909104 containerd[1472]: time="2026-01-28T00:57:22.909052360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:57:22.984691 containerd[1472]: time="2026-01-28T00:57:22.984603065Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:22.986603 containerd[1472]: time="2026-01-28T00:57:22.986395900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:57:22.986796 containerd[1472]: time="2026-01-28T00:57:22.986432775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:57:22.987111 kubelet[2539]: E0128 00:57:22.986982 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:57:22.987193 kubelet[2539]: E0128 00:57:22.987112 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:57:22.987232 kubelet[2539]: E0128 00:57:22.987213 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:22.987316 kubelet[2539]: E0128 00:57:22.987262 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:57:23.677072 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:59818.service - OpenSSH per-connection server daemon (10.0.0.1:59818). Jan 28 00:57:23.760314 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:23.763113 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:23.772735 systemd-logind[1461]: New session 9 of user core. Jan 28 00:57:23.780302 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:57:24.016791 sshd[5617]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:24.023757 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:57:24.032458 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:59818.service: Deactivated successfully. Jan 28 00:57:24.054258 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:57:24.059171 systemd-logind[1461]: Removed session 9. 
Jan 28 00:57:24.911149 containerd[1472]: time="2026-01-28T00:57:24.911069207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:57:24.978394 containerd[1472]: time="2026-01-28T00:57:24.978293274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:24.980374 containerd[1472]: time="2026-01-28T00:57:24.980283382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:57:24.980464 containerd[1472]: time="2026-01-28T00:57:24.980383832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:57:24.980735 kubelet[2539]: E0128 00:57:24.980628 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:57:24.980735 kubelet[2539]: E0128 00:57:24.980709 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:57:24.981385 kubelet[2539]: E0128 00:57:24.980787 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:24.982384 containerd[1472]: time="2026-01-28T00:57:24.982300609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:57:25.059422 containerd[1472]: time="2026-01-28T00:57:25.059144745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:57:25.063806 containerd[1472]: time="2026-01-28T00:57:25.063601925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:57:25.064049 containerd[1472]: time="2026-01-28T00:57:25.063815899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:57:25.064630 kubelet[2539]: E0128 00:57:25.064547 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:57:25.064630 kubelet[2539]: E0128 00:57:25.064609 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:57:25.064761 kubelet[2539]: E0128 00:57:25.064700 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:57:25.064886 kubelet[2539]: E0128 00:57:25.064759 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:57:26.905379 kubelet[2539]: E0128 00:57:26.905273 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:57:29.047318 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:59828.service - OpenSSH per-connection server daemon (10.0.0.1:59828). Jan 28 00:57:29.092341 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 59828 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:29.094552 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:29.107395 systemd-logind[1461]: New session 10 of user core. Jan 28 00:57:29.114709 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 00:57:29.279150 sshd[5637]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:29.283481 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:59828.service: Deactivated successfully. Jan 28 00:57:29.285961 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:57:29.286765 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. 
Jan 28 00:57:29.288298 systemd-logind[1461]: Removed session 10. Jan 28 00:57:30.906838 kubelet[2539]: E0128 00:57:30.906642 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:57:32.906399 kubelet[2539]: E0128 00:57:32.906274 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:57:34.305385 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:33306.service - OpenSSH per-connection server daemon (10.0.0.1:33306). Jan 28 00:57:34.377144 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 33306 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:34.379886 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:34.390730 systemd-logind[1461]: New session 11 of user core. Jan 28 00:57:34.397159 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:57:34.589112 sshd[5652]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:34.599518 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:33306.service: Deactivated successfully. Jan 28 00:57:34.604170 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:57:34.607076 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:57:34.611607 systemd-logind[1461]: Removed session 11. 
Jan 28 00:57:34.905959 kubelet[2539]: E0128 00:57:34.905690 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:57:34.907011 kubelet[2539]: E0128 00:57:34.906963 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:57:38.905561 kubelet[2539]: E0128 00:57:38.905459 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:57:38.907431 kubelet[2539]: E0128 00:57:38.907349 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:57:39.616344 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:33310.service - OpenSSH per-connection server daemon (10.0.0.1:33310). Jan 28 00:57:39.657943 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 33310 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:39.661422 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:39.666826 systemd-logind[1461]: New session 12 of user core. 
Jan 28 00:57:39.672207 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 00:57:39.823259 sshd[5668]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:39.833416 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:33310.service: Deactivated successfully. Jan 28 00:57:39.836751 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 00:57:39.862593 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Jan 28 00:57:39.868425 systemd-logind[1461]: Removed session 12. Jan 28 00:57:44.846564 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:47816.service - OpenSSH per-connection server daemon (10.0.0.1:47816). Jan 28 00:57:44.896635 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 47816 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:44.901019 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:44.911154 kubelet[2539]: E0128 00:57:44.911071 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:57:44.913347 systemd-logind[1461]: New session 13 of user core. Jan 28 00:57:44.920504 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 00:57:45.102621 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:45.112510 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:47816.service: Deactivated successfully. Jan 28 00:57:45.115168 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 00:57:45.116969 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Jan 28 00:57:45.128764 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:47822.service - OpenSSH per-connection server daemon (10.0.0.1:47822). Jan 28 00:57:45.132055 systemd-logind[1461]: Removed session 13. Jan 28 00:57:45.171498 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 47822 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:45.174407 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:45.182629 systemd-logind[1461]: New session 14 of user core. Jan 28 00:57:45.191174 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 00:57:45.504369 sshd[5723]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:45.518657 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:47822.service: Deactivated successfully. Jan 28 00:57:45.522818 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 00:57:45.527729 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Jan 28 00:57:45.541521 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:47838.service - OpenSSH per-connection server daemon (10.0.0.1:47838). Jan 28 00:57:45.543777 systemd-logind[1461]: Removed session 14. 
Jan 28 00:57:45.580953 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 47838 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:45.583875 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:45.593759 systemd-logind[1461]: New session 15 of user core. Jan 28 00:57:45.602190 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 00:57:45.780078 sshd[5736]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:45.786479 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:47838.service: Deactivated successfully. Jan 28 00:57:45.791090 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 00:57:45.792497 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Jan 28 00:57:45.794784 systemd-logind[1461]: Removed session 15. Jan 28 00:57:45.910349 kubelet[2539]: E0128 00:57:45.910093 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:57:47.907639 kubelet[2539]: E0128 00:57:47.907072 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:57:48.906077 kubelet[2539]: E0128 00:57:48.905952 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:57:49.906865 kubelet[2539]: E0128 00:57:49.906723 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:57:50.801099 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:47840.service - OpenSSH per-connection server daemon (10.0.0.1:47840). Jan 28 00:57:50.871008 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 47840 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:50.874305 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:50.883852 systemd-logind[1461]: New session 16 of user core. Jan 28 00:57:50.890217 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 00:57:51.077515 sshd[5750]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:51.083547 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:47840.service: Deactivated successfully. Jan 28 00:57:51.087035 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 00:57:51.088296 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Jan 28 00:57:51.089602 systemd-logind[1461]: Removed session 16. Jan 28 00:57:53.911940 kubelet[2539]: E0128 00:57:53.911793 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:57:56.099384 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:48380.service - OpenSSH per-connection server daemon (10.0.0.1:48380). Jan 28 00:57:56.184342 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 48380 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:57:56.187422 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:56.196056 systemd-logind[1461]: New session 17 of user core. Jan 28 00:57:56.202431 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 00:57:56.380393 sshd[5772]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:56.386551 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. Jan 28 00:57:56.387092 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:48380.service: Deactivated successfully. Jan 28 00:57:56.390251 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 00:57:56.395469 systemd-logind[1461]: Removed session 17. 
Jan 28 00:57:56.989569 kubelet[2539]: E0128 00:57:56.989213 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:57:57.907288 kubelet[2539]: E0128 00:57:57.906485 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:57.917420 kubelet[2539]: E0128 00:57:57.917212 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:58:01.083842 kubelet[2539]: E0128 00:58:01.083632 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:58:01.088277 containerd[1472]: time="2026-01-28T00:58:01.085216607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:58:01.446460 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:48388.service - OpenSSH per-connection server daemon (10.0.0.1:48388). 
Jan 28 00:58:01.456207 containerd[1472]: time="2026-01-28T00:58:01.456122172Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:01.458032 containerd[1472]: time="2026-01-28T00:58:01.457976344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:58:01.458621 containerd[1472]: time="2026-01-28T00:58:01.458077568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:01.459256 kubelet[2539]: E0128 00:58:01.459129 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:58:01.459576 kubelet[2539]: E0128 00:58:01.459242 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:58:01.459576 kubelet[2539]: E0128 00:58:01.459361 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5ff6b57675-s9qlm_calico-system(0d4e568d-f278-4de9-a835-c39874b224a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:01.459576 kubelet[2539]: E0128 00:58:01.459394 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:58:01.505127 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 48388 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:01.509352 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:01.602796 systemd-logind[1461]: New session 18 of user core. Jan 28 00:58:01.611294 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 00:58:01.880864 sshd[5790]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:01.888593 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:48388.service: Deactivated successfully. Jan 28 00:58:01.893757 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 28 00:58:01.896546 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. Jan 28 00:58:01.898831 systemd-logind[1461]: Removed session 18. Jan 28 00:58:01.905477 kubelet[2539]: E0128 00:58:01.905350 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:01.908123 containerd[1472]: time="2026-01-28T00:58:01.908008459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:01.989498 containerd[1472]: time="2026-01-28T00:58:01.989405949Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:01.991389 containerd[1472]: time="2026-01-28T00:58:01.991316219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:01.991517 containerd[1472]: time="2026-01-28T00:58:01.991388640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:01.991824 kubelet[2539]: E0128 00:58:01.991725 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:01.991887 kubelet[2539]: E0128 00:58:01.991819 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:01.992100 kubelet[2539]: E0128 00:58:01.992034 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-zhbk8_calico-apiserver(db3e5b3d-8d48-4187-bdaf-770b7259aaa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:01.992144 kubelet[2539]: E0128 00:58:01.992115 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:58:04.908161 kubelet[2539]: E0128 00:58:04.908064 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:58:06.897548 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:44012.service - OpenSSH per-connection server daemon (10.0.0.1:44012). Jan 28 00:58:06.942314 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 44012 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:06.946169 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:06.955819 systemd-logind[1461]: New session 19 of user core. Jan 28 00:58:06.960171 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 00:58:07.112447 sshd[5813]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:07.119981 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:44012.service: Deactivated successfully. Jan 28 00:58:07.123097 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 00:58:07.124458 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. Jan 28 00:58:07.125839 systemd-logind[1461]: Removed session 19. Jan 28 00:58:07.904617 kubelet[2539]: E0128 00:58:07.904497 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:07.905451 kubelet[2539]: E0128 00:58:07.905406 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:08.905607 containerd[1472]: time="2026-01-28T00:58:08.905565393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:09.010793 containerd[1472]: time="2026-01-28T00:58:09.010654001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:09.012591 containerd[1472]: time="2026-01-28T00:58:09.012328776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:09.012812 containerd[1472]: time="2026-01-28T00:58:09.012526093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:09.013166 kubelet[2539]: E0128 00:58:09.013090 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:09.013572 kubelet[2539]: 
E0128 00:58:09.013166 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:09.013572 kubelet[2539]: E0128 00:58:09.013282 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8f958c6dc-2kx8s_calico-apiserver(71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:09.013572 kubelet[2539]: E0128 00:58:09.013317 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:58:10.906377 containerd[1472]: time="2026-01-28T00:58:10.906206636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:58:10.983624 containerd[1472]: time="2026-01-28T00:58:10.983523992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:10.985352 containerd[1472]: time="2026-01-28T00:58:10.985183356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:58:10.985478 containerd[1472]: time="2026-01-28T00:58:10.985279074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:58:10.985797 kubelet[2539]: E0128 00:58:10.985706 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:10.986553 kubelet[2539]: E0128 00:58:10.985800 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:10.986553 kubelet[2539]: E0128 00:58:10.985961 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:10.993161 containerd[1472]: time="2026-01-28T00:58:10.993074710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:58:11.075615 containerd[1472]: time="2026-01-28T00:58:11.075546378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:11.078109 containerd[1472]: time="2026-01-28T00:58:11.077852392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:58:11.078504 containerd[1472]: time="2026-01-28T00:58:11.077996387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:11.078624 kubelet[2539]: E0128 00:58:11.078543 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:11.078957 kubelet[2539]: E0128 00:58:11.078628 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:11.078957 kubelet[2539]: E0128 00:58:11.078776 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-864d6fb5f6-7sb5q_calico-system(251978dd-1b11-4c38-8024-bd42a42999a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:11.078957 kubelet[2539]: E0128 00:58:11.078839 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:58:11.907974 kubelet[2539]: E0128 00:58:11.907809 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:58:12.141527 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:44022.service - OpenSSH per-connection server daemon (10.0.0.1:44022). Jan 28 00:58:12.212049 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 44022 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:12.213961 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:12.222047 systemd-logind[1461]: New session 20 of user core. Jan 28 00:58:12.231080 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 00:58:12.383233 sshd[5828]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:12.388860 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:44022.service: Deactivated successfully. Jan 28 00:58:12.392840 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 00:58:12.394761 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit. Jan 28 00:58:12.397037 systemd-logind[1461]: Removed session 20. Jan 28 00:58:13.906424 kubelet[2539]: E0128 00:58:13.906326 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:58:15.905615 containerd[1472]: time="2026-01-28T00:58:15.905462568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:58:16.036826 containerd[1472]: time="2026-01-28T00:58:16.036758499Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:16.038308 containerd[1472]: time="2026-01-28T00:58:16.038165101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:58:16.038308 containerd[1472]: time="2026-01-28T00:58:16.038253506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:16.038642 kubelet[2539]: E0128 00:58:16.038533 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:16.038642 kubelet[2539]: E0128 00:58:16.038657 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:16.039218 kubelet[2539]: E0128 00:58:16.038779 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x77cp_calico-system(0417d323-0fbe-457b-a078-73d52ee9f54e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:16.039218 kubelet[2539]: E0128 00:58:16.038816 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:58:17.403422 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:56214.service - OpenSSH per-connection server daemon (10.0.0.1:56214). Jan 28 00:58:17.471770 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 56214 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:17.475230 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:17.484469 systemd-logind[1461]: New session 21 of user core. Jan 28 00:58:17.492237 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 00:58:17.670804 sshd[5885]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:17.684357 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:56214.service: Deactivated successfully. Jan 28 00:58:17.688210 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 00:58:17.691003 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit. Jan 28 00:58:17.701725 systemd[1]: Started sshd@21-10.0.0.11:22-10.0.0.1:56226.service - OpenSSH per-connection server daemon (10.0.0.1:56226). Jan 28 00:58:17.706609 systemd-logind[1461]: Removed session 21. Jan 28 00:58:17.757959 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 56226 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:17.760500 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:17.776147 systemd-logind[1461]: New session 22 of user core. Jan 28 00:58:17.779282 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 00:58:18.269330 sshd[5900]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:18.284610 systemd[1]: sshd@21-10.0.0.11:22-10.0.0.1:56226.service: Deactivated successfully. Jan 28 00:58:18.289079 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 00:58:18.290876 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit. Jan 28 00:58:18.298815 systemd[1]: Started sshd@22-10.0.0.11:22-10.0.0.1:56228.service - OpenSSH per-connection server daemon (10.0.0.1:56228). Jan 28 00:58:18.300703 systemd-logind[1461]: Removed session 22. 
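The pull failures above all follow the same shape: containerd resolves ghcr.io/flatcar/calico/<image>:v3.30.4, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet records ErrImagePull for the container. A quick way to confirm that the tag itself is absent from the registry (rather than an auth or network problem) is to ask the OCI distribution API for the manifest directly. The sketch below is an illustration only: it assumes anonymous pull access to the public GHCR repository, the Python requests package, and a hypothetical helper name.

    # Probe ghcr.io for an image tag via the OCI distribution API.
    # Illustrative sketch; a 404/False here corresponds to the
    # "not found" errors recorded in the log above.
    import requests

    def manifest_exists(repo: str, tag: str) -> bool:
        # Anonymous bearer token for a public GHCR repository (assumption).
        token = requests.get(
            "https://ghcr.io/token",
            params={"scope": f"repository:{repo}:pull"},
            timeout=10,
        ).json()["token"]
        resp = requests.head(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            timeout=10,
        )
        return resp.status_code == 200

    if __name__ == "__main__":
        # Per the log, this tag is expected to be missing.
        print(manifest_exists("flatcar/calico/goldmane", "v3.30.4"))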
Jan 28 00:58:18.381281 sshd[5913]: Accepted publickey for core from 10.0.0.1 port 56228 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:18.383219 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:18.391749 systemd-logind[1461]: New session 23 of user core. Jan 28 00:58:18.398266 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 00:58:19.109835 sshd[5913]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:19.122653 systemd[1]: sshd@22-10.0.0.11:22-10.0.0.1:56228.service: Deactivated successfully. Jan 28 00:58:19.127295 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 00:58:19.135407 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit. Jan 28 00:58:19.152150 systemd[1]: Started sshd@23-10.0.0.11:22-10.0.0.1:56238.service - OpenSSH per-connection server daemon (10.0.0.1:56238). Jan 28 00:58:19.156994 systemd-logind[1461]: Removed session 23. Jan 28 00:58:19.212811 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 56238 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:19.215614 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:19.225050 systemd-logind[1461]: New session 24 of user core. Jan 28 00:58:19.234202 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 00:58:19.593293 sshd[5931]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:19.604965 systemd[1]: sshd@23-10.0.0.11:22-10.0.0.1:56238.service: Deactivated successfully. Jan 28 00:58:19.609884 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 00:58:19.613560 systemd-logind[1461]: Session 24 logged out. Waiting for processes to exit. Jan 28 00:58:19.627505 systemd[1]: Started sshd@24-10.0.0.11:22-10.0.0.1:56246.service - OpenSSH per-connection server daemon (10.0.0.1:56246). Jan 28 00:58:19.633201 systemd-logind[1461]: Removed session 24. Jan 28 00:58:19.673228 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 56246 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:19.677480 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:19.694108 systemd-logind[1461]: New session 25 of user core. Jan 28 00:58:19.704312 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 00:58:19.882428 sshd[5943]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:19.889312 systemd[1]: sshd@24-10.0.0.11:22-10.0.0.1:56246.service: Deactivated successfully. Jan 28 00:58:19.894109 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 00:58:19.895842 systemd-logind[1461]: Session 25 logged out. Waiting for processes to exit. Jan 28 00:58:19.897621 systemd-logind[1461]: Removed session 25. 
Jan 28 00:58:19.907645 kubelet[2539]: E0128 00:58:19.907539 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:58:19.910851 containerd[1472]: time="2026-01-28T00:58:19.910351973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:58:20.054335 containerd[1472]: time="2026-01-28T00:58:20.054246751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:20.056458 containerd[1472]: time="2026-01-28T00:58:20.056266619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:58:20.056458 containerd[1472]: time="2026-01-28T00:58:20.056349307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:58:20.057049 kubelet[2539]: E0128 00:58:20.056987 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:20.057049 kubelet[2539]: E0128 00:58:20.057045 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:20.057628 kubelet[2539]: E0128 00:58:20.057123 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:20.059368 containerd[1472]: time="2026-01-28T00:58:20.059274748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:58:20.205380 containerd[1472]: time="2026-01-28T00:58:20.204833166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:20.206991 containerd[1472]: time="2026-01-28T00:58:20.206861440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 
00:58:20.207150 containerd[1472]: time="2026-01-28T00:58:20.206966751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:58:20.207519 kubelet[2539]: E0128 00:58:20.207447 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:20.207519 kubelet[2539]: E0128 00:58:20.207520 2539 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:20.207664 kubelet[2539]: E0128 00:58:20.207627 2539 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5dcgj_calico-system(bc0d5231-81be-4bd5-ba52-4066772e339a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:20.207848 kubelet[2539]: E0128 00:58:20.207676 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:58:21.905177 kubelet[2539]: E0128 00:58:21.905117 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:22.907690 kubelet[2539]: E0128 00:58:22.907609 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:58:24.910421 systemd[1]: Started sshd@25-10.0.0.11:22-10.0.0.1:40494.service - OpenSSH per-connection server daemon (10.0.0.1:40494). Jan 28 00:58:25.115675 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 40494 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:25.118718 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:25.127167 systemd-logind[1461]: New session 26 of user core. Jan 28 00:58:25.138093 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 00:58:25.323769 sshd[5959]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:25.338689 systemd[1]: sshd@25-10.0.0.11:22-10.0.0.1:40494.service: Deactivated successfully. Jan 28 00:58:25.360751 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 00:58:25.363724 systemd-logind[1461]: Session 26 logged out. Waiting for processes to exit. Jan 28 00:58:25.366562 systemd-logind[1461]: Removed session 26. Jan 28 00:58:25.937025 kubelet[2539]: E0128 00:58:25.936524 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:58:25.937025 kubelet[2539]: E0128 00:58:25.936496 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:58:28.908692 kubelet[2539]: E0128 00:58:28.908220 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e" Jan 28 00:58:30.360055 systemd[1]: Started sshd@26-10.0.0.11:22-10.0.0.1:40506.service - OpenSSH per-connection server daemon (10.0.0.1:40506). 
Jan 28 00:58:30.417720 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 40506 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:30.420532 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:30.430728 systemd-logind[1461]: New session 27 of user core. Jan 28 00:58:30.446149 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 00:58:30.650153 sshd[5976]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:30.655081 systemd[1]: sshd@26-10.0.0.11:22-10.0.0.1:40506.service: Deactivated successfully. Jan 28 00:58:30.658682 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 00:58:30.660562 systemd-logind[1461]: Session 27 logged out. Waiting for processes to exit. Jan 28 00:58:30.663120 systemd-logind[1461]: Removed session 27. Jan 28 00:58:30.911006 kubelet[2539]: E0128 00:58:30.910716 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5dcgj" podUID="bc0d5231-81be-4bd5-ba52-4066772e339a" Jan 28 00:58:31.906170 kubelet[2539]: E0128 00:58:31.905881 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-2kx8s" podUID="71b0d016-5a81-42fa-b2b3-99cd7fbb3ba4" Jan 28 00:58:33.904868 kubelet[2539]: E0128 00:58:33.904559 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:35.677455 systemd[1]: Started sshd@27-10.0.0.11:22-10.0.0.1:53660.service - OpenSSH per-connection server daemon (10.0.0.1:53660). Jan 28 00:58:35.719634 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 53660 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:35.723569 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:35.734355 systemd-logind[1461]: New session 28 of user core. Jan 28 00:58:35.739449 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 28 00:58:35.896289 sshd[5994]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:35.902063 systemd[1]: sshd@27-10.0.0.11:22-10.0.0.1:53660.service: Deactivated successfully. Jan 28 00:58:35.906885 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 00:58:35.908516 systemd-logind[1461]: Session 28 logged out. Waiting for processes to exit. Jan 28 00:58:35.910632 systemd-logind[1461]: Removed session 28. Jan 28 00:58:36.905955 kubelet[2539]: E0128 00:58:36.905816 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ff6b57675-s9qlm" podUID="0d4e568d-f278-4de9-a835-c39874b224a5" Jan 28 00:58:36.907026 kubelet[2539]: E0128 00:58:36.906687 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-864d6fb5f6-7sb5q" podUID="251978dd-1b11-4c38-8024-bd42a42999a9" Jan 28 00:58:37.907472 kubelet[2539]: E0128 00:58:37.907383 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f958c6dc-zhbk8" podUID="db3e5b3d-8d48-4187-bdaf-770b7259aaa2" Jan 28 00:58:37.908797 kubelet[2539]: E0128 00:58:37.908354 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:40.915003 systemd[1]: Started sshd@28-10.0.0.11:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon (10.0.0.1:53672). Jan 28 00:58:40.971606 sshd[6009]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:40.973822 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:40.980107 systemd-logind[1461]: New session 29 of user core. 
Jan 28 00:58:40.986125 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 00:58:41.132378 sshd[6009]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:41.139288 systemd-logind[1461]: Session 29 logged out. Waiting for processes to exit. Jan 28 00:58:41.139783 systemd[1]: sshd@28-10.0.0.11:22-10.0.0.1:53672.service: Deactivated successfully. Jan 28 00:58:41.142603 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 00:58:41.145713 systemd-logind[1461]: Removed session 29. Jan 28 00:58:41.905434 kubelet[2539]: E0128 00:58:41.905378 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:42.906479 kubelet[2539]: E0128 00:58:42.905196 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x77cp" podUID="0417d323-0fbe-457b-a078-73d52ee9f54e"
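The recurring dns.go:154 warning is kubelet noting that the node's resolv.conf lists more nameservers than it will pass through to pods; it truncates the list and applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8. The limit of three entries is, as far as I know, the kubelet default inherited from the classic resolver limit; the snippet below is a minimal sketch of that truncation under that assumption, not kubelet's actual code.

    # Minimal sketch of the check behind "Nameserver limits exceeded":
    # count nameserver lines and keep only the first MAX_NAMESERVERS.
    MAX_NAMESERVERS = 3  # assumed kubelet default

    def applied_nameservers(resolv_conf_text: str):
        servers = [
            parts[1]
            for line in resolv_conf_text.splitlines()
            if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
        ]
        if len(servers) > MAX_NAMESERVERS:
            print(f"Nameserver limits exceeded, keeping first {MAX_NAMESERVERS}")
        return servers[:MAX_NAMESERVERS]

    example = (
        "nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 9.9.9.9\n"
    )
    print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']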