Jan 17 00:49:18.560509 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:49:18.560537 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:49:18.565695 kernel: BIOS-provided physical RAM map: Jan 17 00:49:18.565716 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 00:49:18.565726 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 00:49:18.565735 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 00:49:18.565746 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 17 00:49:18.565756 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 17 00:49:18.565768 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:49:18.565781 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 17 00:49:18.565789 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 00:49:18.565797 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 00:49:18.565806 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 17 00:49:18.565817 kernel: NX (Execute Disable) protection: active Jan 17 00:49:18.565828 kernel: APIC: Static calls initialized Jan 17 00:49:18.565840 kernel: SMBIOS 2.8 present. 
Jan 17 00:49:18.565850 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 17 00:49:18.565860 kernel: Hypervisor detected: KVM Jan 17 00:49:18.565870 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:49:18.565880 kernel: kvm-clock: using sched offset of 7154707413 cycles Jan 17 00:49:18.565890 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:49:18.565901 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:49:18.565911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:49:18.565921 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:49:18.565935 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 17 00:49:18.565945 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 00:49:18.565955 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:49:18.565965 kernel: Using GB pages for direct mapping Jan 17 00:49:18.565975 kernel: ACPI: Early table checksum verification disabled Jan 17 00:49:18.565985 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 17 00:49:18.565995 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566005 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566015 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566028 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 17 00:49:18.566038 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566047 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566057 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566073 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:49:18.566085 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 17 00:49:18.566096 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 17 00:49:18.566111 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 17 00:49:18.566124 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 17 00:49:18.566135 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 17 00:49:18.566145 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 17 00:49:18.566156 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 17 00:49:18.566166 kernel: No NUMA configuration found Jan 17 00:49:18.566177 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 17 00:49:18.566190 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 17 00:49:18.566201 kernel: Zone ranges: Jan 17 00:49:18.566211 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:49:18.566303 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 17 00:49:18.566314 kernel: Normal empty Jan 17 00:49:18.566324 kernel: Movable zone start for each node Jan 17 00:49:18.566335 kernel: Early memory node ranges Jan 17 00:49:18.566345 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 00:49:18.566356 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 17 00:49:18.566367 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 17 00:49:18.566381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:49:18.566392 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 00:49:18.566402 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 17 00:49:18.566413 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:49:18.566424 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:49:18.566435 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:49:18.566445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:49:18.566456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:49:18.566466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:49:18.566481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:49:18.566491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:49:18.566502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:49:18.566512 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:49:18.566523 kernel: TSC deadline timer available Jan 17 00:49:18.566533 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:49:18.566544 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:49:18.566616 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:49:18.566628 kernel: kvm-guest: setup PV sched yield Jan 17 00:49:18.566644 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 00:49:18.566654 kernel: Booting paravirtualized kernel on KVM Jan 17 00:49:18.566665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:49:18.566675 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:49:18.566686 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:49:18.566696 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:49:18.566707 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:49:18.566717 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:49:18.566727 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:49:18.566743 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:49:18.566754 kernel: random: crng init done Jan 17 00:49:18.566766 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:49:18.566776 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:49:18.566785 kernel: Fallback order for Node 0: 0 Jan 17 00:49:18.566794 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 17 00:49:18.566804 kernel: Policy zone: DMA32 Jan 17 00:49:18.566817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:49:18.566830 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 17 00:49:18.566841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:49:18.566851 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:49:18.566862 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:49:18.566872 kernel: Dynamic Preempt: voluntary Jan 17 00:49:18.566883 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:49:18.566898 kernel: rcu: RCU event tracing is enabled. Jan 17 00:49:18.566909 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:49:18.566920 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:49:18.566934 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:49:18.566945 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:49:18.566956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:49:18.566966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:49:18.566976 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:49:18.566987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:49:18.566997 kernel: Console: colour VGA+ 80x25 Jan 17 00:49:18.567008 kernel: printk: console [ttyS0] enabled Jan 17 00:49:18.567018 kernel: ACPI: Core revision 20230628 Jan 17 00:49:18.567032 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:49:18.567042 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:49:18.567053 kernel: x2apic enabled Jan 17 00:49:18.567063 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:49:18.567074 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:49:18.567084 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:49:18.567095 kernel: kvm-guest: setup PV IPIs Jan 17 00:49:18.567106 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:49:18.567129 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:49:18.567140 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 17 00:49:18.567151 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:49:18.567162 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:49:18.567176 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:49:18.567187 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:49:18.567198 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:49:18.567209 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:49:18.567295 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:49:18.567312 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:49:18.567324 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 17 00:49:18.567335 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:49:18.567347 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:49:18.567358 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:49:18.567369 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:49:18.567380 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:49:18.567391 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:49:18.567406 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:49:18.567417 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:49:18.567428 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:49:18.567440 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:49:18.567450 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:49:18.567462 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:49:18.567473 kernel: landlock: Up and running. Jan 17 00:49:18.567484 kernel: SELinux: Initializing. Jan 17 00:49:18.567495 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:49:18.567510 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:49:18.567521 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:49:18.567532 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:49:18.567543 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:49:18.567608 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:49:18.567620 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:49:18.567630 kernel: signal: max sigframe size: 1776 Jan 17 00:49:18.567643 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:49:18.567655 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:49:18.567669 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:49:18.567680 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:49:18.567691 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:49:18.567702 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 17 00:49:18.567713 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:49:18.567724 kernel: smpboot: Max logical packages: 1 Jan 17 00:49:18.567735 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:49:18.567746 kernel: devtmpfs: initialized Jan 17 00:49:18.567758 kernel: x86/mm: Memory block size: 128MB Jan 17 00:49:18.567774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:49:18.567783 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:49:18.567794 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:49:18.567807 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:49:18.567817 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:49:18.567826 kernel: audit: type=2000 audit(1768610956.182:1): state=initialized audit_enabled=0 res=1 Jan 17 00:49:18.567837 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:49:18.567848 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:49:18.567859 kernel: cpuidle: using governor menu Jan 17 00:49:18.567875 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:49:18.567886 kernel: dca service started, version 1.12.1 Jan 17 00:49:18.567897 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:49:18.567908 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:49:18.567919 kernel: PCI: Using configuration type 1 for base access Jan 17 00:49:18.567930 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:49:18.567942 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:49:18.567953 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:49:18.567964 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:49:18.567979 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:49:18.567990 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:49:18.568001 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:49:18.568012 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:49:18.568023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:49:18.568034 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:49:18.568045 kernel: ACPI: Interpreter enabled Jan 17 00:49:18.568056 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:49:18.568067 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:49:18.568082 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:49:18.568093 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:49:18.568105 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:49:18.568115 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:49:18.568448 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:49:18.569863 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:49:18.570044 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:49:18.570064 kernel: PCI host bridge to bus 0000:00 Jan 17 00:49:18.570323 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:49:18.570486 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 17 00:49:18.570837 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:49:18.570998 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:49:18.571148 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:49:18.571464 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 17 00:49:18.571782 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:49:18.571981 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:49:18.572161 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:49:18.572491 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 17 00:49:18.575835 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 17 00:49:18.576013 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 17 00:49:18.576187 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:49:18.576471 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:49:18.576705 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 17 00:49:18.576885 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 17 00:49:18.577500 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 17 00:49:18.577754 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:49:18.577933 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 00:49:18.578105 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 17 00:49:18.578378 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 17 00:49:18.580450 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:49:18.580704 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 17 00:49:18.580885 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 17 00:49:18.581053 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 17 00:49:18.581298 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 17 00:49:18.581492 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:49:18.582133 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:49:18.582415 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:49:18.582655 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 17 00:49:18.582836 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 17 00:49:18.583017 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:49:18.583182 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 00:49:18.583203 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:49:18.583297 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:49:18.583311 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:49:18.583323 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:49:18.583334 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 00:49:18.583345 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:49:18.583356 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:49:18.583367 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:49:18.583378 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
17 00:49:18.583394 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:49:18.583405 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:49:18.583416 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:49:18.583427 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:49:18.583438 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:49:18.583449 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:49:18.583460 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:49:18.583471 kernel: iommu: Default domain type: Translated Jan 17 00:49:18.583482 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:49:18.583498 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:49:18.583509 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:49:18.583520 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 00:49:18.583531 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 17 00:49:18.583762 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:49:18.583937 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:49:18.584102 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:49:18.584117 kernel: vgaarb: loaded Jan 17 00:49:18.584134 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 00:49:18.584146 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:49:18.584157 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:49:18.584167 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:49:18.584179 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:49:18.584191 kernel: pnp: PnP ACPI init Jan 17 00:49:18.584458 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:49:18.584477 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:49:18.584494 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:49:18.584506 kernel: NET: Registered PF_INET protocol family Jan 17 00:49:18.584517 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:49:18.584528 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:49:18.584540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:49:18.584551 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:49:18.584619 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:49:18.584631 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:49:18.584642 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:49:18.584657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:49:18.584669 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:49:18.584680 kernel: NET: Registered PF_XDP protocol family Jan 17 00:49:18.584843 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:49:18.585088 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:49:18.585327 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:49:18.585481 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:49:18.588105 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 17 00:49:18.588428 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 17 00:49:18.588446 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:49:18.588458 kernel: Initialise system trusted keyrings Jan 17 00:49:18.588469 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:49:18.588481 kernel: Key type asymmetric registered Jan 17 00:49:18.588491 kernel: Asymmetric key parser 'x509' registered Jan 17 00:49:18.588502 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:49:18.588513 kernel: io scheduler mq-deadline registered Jan 17 00:49:18.588525 kernel: io scheduler kyber registered Jan 17 00:49:18.588540 kernel: io scheduler bfq registered Jan 17 00:49:18.588551 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:49:18.588621 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:49:18.588633 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:49:18.588646 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:49:18.588658 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:49:18.588668 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:49:18.588678 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:49:18.588690 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:49:18.588706 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:49:18.588717 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:49:18.590454 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:49:18.590682 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:49:18.590850 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:49:17 UTC (1768610957) Jan 17 00:49:18.591010 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:49:18.591025 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:49:18.591037 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:49:18.591053 kernel: Segment Routing with IPv6 Jan 17 00:49:18.591064 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:49:18.591076 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:49:18.591087 kernel: Key type dns_resolver registered Jan 17 00:49:18.591098 kernel: IPI shorthand broadcast: enabled Jan 17 00:49:18.591109 kernel: sched_clock: Marking stable (1565036627, 637246183)->(2881182517, -678899707) Jan 17 00:49:18.591120 kernel: registered taskstats version 1 Jan 17 00:49:18.591131 kernel: Loading compiled-in X.509 certificates Jan 17 00:49:18.591142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:49:18.591157 kernel: Key type .fscrypt registered Jan 17 00:49:18.591168 kernel: Key type fscrypt-provisioning registered Jan 17 00:49:18.591179 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:49:18.591190 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:49:18.591202 kernel: ima: No architecture policies found Jan 17 00:49:18.591212 kernel: clk: Disabling unused clocks Jan 17 00:49:18.591305 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:49:18.591316 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:49:18.591327 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:49:18.591343 kernel: Run /init as init process Jan 17 00:49:18.591354 kernel: with arguments: Jan 17 00:49:18.591365 kernel: /init Jan 17 00:49:18.591376 kernel: with environment: Jan 17 00:49:18.591387 kernel: HOME=/ Jan 17 00:49:18.591398 kernel: TERM=linux Jan 17 00:49:18.591412 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:49:18.591426 systemd[1]: Detected virtualization kvm. Jan 17 00:49:18.591442 systemd[1]: Detected architecture x86-64. Jan 17 00:49:18.591453 systemd[1]: Running in initrd. Jan 17 00:49:18.591465 systemd[1]: No hostname configured, using default hostname. Jan 17 00:49:18.591476 systemd[1]: Hostname set to . Jan 17 00:49:18.591488 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:49:18.591499 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:49:18.591511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:49:18.591523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:49:18.591539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:49:18.591551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:49:18.591614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:49:18.591625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:49:18.591641 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:49:18.591653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:49:18.591664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:49:18.591680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:49:18.591692 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:49:18.591704 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:49:18.591716 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:49:18.591743 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:49:18.591760 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:49:18.591776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:49:18.591787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:49:18.591799 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 00:49:18.591813 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:49:18.591824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:49:18.591835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:49:18.591847 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:49:18.591860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:49:18.591872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:49:18.591888 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:49:18.591901 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:49:18.591913 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:49:18.591925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:49:18.591937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:49:18.591978 systemd-journald[195]: Collecting audit messages is disabled. Jan 17 00:49:18.592011 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:49:18.592024 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:49:18.592036 systemd-journald[195]: Journal started Jan 17 00:49:18.592064 systemd-journald[195]: Runtime Journal (/run/log/journal/ce1824dc1b194e699c90d05eef535430) is 6.0M, max 48.4M, 42.3M free. Jan 17 00:49:18.603421 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:49:18.622182 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:49:18.642846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:49:18.977495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:49:18.977541 kernel: Bridge firewalling registered Jan 17 00:49:18.646743 systemd-modules-load[196]: Inserted module 'overlay' Jan 17 00:49:18.730839 systemd-modules-load[196]: Inserted module 'br_netfilter' Jan 17 00:49:18.995327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:49:19.021356 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:49:19.034377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:49:19.053959 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:49:19.064712 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:49:19.109905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:49:19.130181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:49:19.136421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:49:19.166551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:49:19.172111 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:49:19.207414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:49:19.224533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:49:19.267669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:49:19.286339 dracut-cmdline[226]: dracut-dracut-053 Jan 17 00:49:19.290530 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:49:19.406910 systemd-resolved[233]: Positive Trust Anchors: Jan 17 00:49:19.407012 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:49:19.407053 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:49:19.439391 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 17 00:49:19.479027 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:49:19.491774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:49:19.656669 kernel: SCSI subsystem initialized Jan 17 00:49:19.679662 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:49:19.729057 kernel: iscsi: registered transport (tcp) Jan 17 00:49:19.781004 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:49:19.781321 kernel: QLogic iSCSI HBA Driver Jan 17 00:49:19.953815 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:49:19.991732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:49:20.074761 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:49:20.074957 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:49:20.081674 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:49:20.187033 kernel: raid6: avx2x4 gen() 19249 MB/s Jan 17 00:49:20.208126 kernel: raid6: avx2x2 gen() 17550 MB/s Jan 17 00:49:20.232874 kernel: raid6: avx2x1 gen() 7312 MB/s Jan 17 00:49:20.232969 kernel: raid6: using algorithm avx2x4 gen() 19249 MB/s Jan 17 00:49:20.258500 kernel: raid6: .... xor() 4259 MB/s, rmw enabled Jan 17 00:49:20.258740 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:49:20.295477 kernel: xor: automatically using best checksumming function avx Jan 17 00:49:20.761747 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:49:20.812728 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:49:20.845494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:49:20.879099 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 17 00:49:20.891742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 00:49:20.937469 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:49:21.029804 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 17 00:49:21.154933 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:49:21.179441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:49:21.362310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:49:21.408738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:49:21.451153 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:49:21.464469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:49:21.473899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:49:21.480791 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:49:21.553818 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:49:21.572123 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:49:21.572186 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:49:21.579507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:49:21.597178 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:49:21.620888 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:49:21.620966 kernel: GPT:9289727 != 19775487 Jan 17 00:49:21.620985 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:49:21.624710 kernel: GPT:9289727 != 19775487 Jan 17 00:49:21.627725 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:49:21.633713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:49:21.634420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:49:21.634707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:49:21.655303 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:49:21.667920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:49:21.672509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:49:21.694430 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:49:21.770942 kernel: libata version 3.00 loaded. Jan 17 00:49:21.770118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:49:21.808123 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:49:21.829551 kernel: AES CTR mode by8 optimization enabled Jan 17 00:49:21.846645 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:49:21.862426 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476) Jan 17 00:49:21.865323 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (474) Jan 17 00:49:21.868381 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:49:21.869384 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:49:21.871086 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 17 00:49:22.160964 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:49:22.161436 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:49:22.161938 kernel: scsi host0: ahci Jan 17 00:49:22.162329 kernel: scsi host1: ahci Jan 17 00:49:22.162641 kernel: scsi host2: ahci Jan 17 00:49:22.162951 kernel: scsi host3: ahci Jan 17 00:49:22.163152 kernel: scsi host4: ahci Jan 17 00:49:22.164973 kernel: scsi host5: ahci Jan 17 00:49:22.165176 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 17 00:49:22.165193 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 17 00:49:22.165304 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 17 00:49:22.165324 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 17 00:49:22.165339 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 17 00:49:22.165354 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 17 00:49:22.171378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:49:22.185903 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:49:22.193920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:49:22.236862 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:49:22.236902 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:49:22.236918 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:49:22.236086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:49:22.287709 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:49:22.287751 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:49:22.287766 kernel: ata3.00: applying bridge limits Jan 17 00:49:22.287780 kernel: ata3.00: configured for UDMA/100 Jan 17 00:49:22.287794 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:49:22.287807 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:49:22.287821 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:49:22.298855 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:49:22.324033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:49:22.346863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:49:22.346894 disk-uuid[557]: Primary Header is updated. Jan 17 00:49:22.346894 disk-uuid[557]: Secondary Entries is updated. Jan 17 00:49:22.346894 disk-uuid[557]: Secondary Header is updated. Jan 17 00:49:22.363049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:49:22.369492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:49:22.451381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:49:22.491804 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:49:22.492114 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:49:22.520697 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:49:23.364490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:49:23.365713 disk-uuid[558]: The operation has completed successfully. 
Jan 17 00:49:23.425695 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:49:23.425931 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:49:23.469668 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:49:23.483151 sh[596]: Success Jan 17 00:49:23.518376 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:49:23.586025 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:49:23.618057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:49:23.626098 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:49:23.661182 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:49:23.661322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:49:23.661355 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:49:23.668533 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:49:23.672878 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:49:23.700649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:49:23.703981 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:49:23.722674 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:49:23.727300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:49:23.749944 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:49:23.749991 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:49:23.750010 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:49:23.760670 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:49:23.775681 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:49:23.785655 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:49:23.794665 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:49:23.809507 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:49:23.907154 ignition[689]: Ignition 2.19.0 Jan 17 00:49:23.907279 ignition[689]: Stage: fetch-offline Jan 17 00:49:23.907383 ignition[689]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:23.907403 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:23.907532 ignition[689]: parsed url from cmdline: "" Jan 17 00:49:23.907538 ignition[689]: no config URL provided Jan 17 00:49:23.907548 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:49:23.907754 ignition[689]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:49:23.907795 ignition[689]: op(1): [started] loading QEMU firmware config module Jan 17 00:49:23.907804 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:49:23.949117 ignition[689]: op(1): [finished] loading QEMU firmware config module Jan 17 00:49:24.003808 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:49:24.040639 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 00:49:24.086890 systemd-networkd[784]: lo: Link UP Jan 17 00:49:24.086929 systemd-networkd[784]: lo: Gained carrier Jan 17 00:49:24.089314 systemd-networkd[784]: Enumeration completed Jan 17 00:49:24.090316 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:49:24.090877 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:49:24.090882 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:49:24.095098 systemd-networkd[784]: eth0: Link UP Jan 17 00:49:24.095104 systemd-networkd[784]: eth0: Gained carrier Jan 17 00:49:24.095114 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:49:24.095975 systemd[1]: Reached target network.target - Network. Jan 17 00:49:24.133465 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:49:24.299312 ignition[689]: parsing config with SHA512: d5b4d3bcc27793544af2feb9d6416a4c04689d73b7f99fd62b31bb1a05ae4b0fca03a3f587ca545414c6b1097ef22a9b9454eab7978ad9bb00ec9e0a4355707e Jan 17 00:49:24.313040 unknown[689]: fetched base config from "system" Jan 17 00:49:24.313371 unknown[689]: fetched user config from "qemu" Jan 17 00:49:24.323005 ignition[689]: fetch-offline: fetch-offline passed Jan 17 00:49:24.323366 ignition[689]: Ignition finished successfully Jan 17 00:49:24.326868 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:49:24.341549 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:49:24.367898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:49:24.395732 ignition[789]: Ignition 2.19.0 Jan 17 00:49:24.395963 ignition[789]: Stage: kargs Jan 17 00:49:24.398184 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:24.398209 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:24.401072 ignition[789]: kargs: kargs passed Jan 17 00:49:24.401150 ignition[789]: Ignition finished successfully Jan 17 00:49:24.421096 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:49:24.439985 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:49:24.466549 ignition[797]: Ignition 2.19.0 Jan 17 00:49:24.466629 ignition[797]: Stage: disks Jan 17 00:49:24.466905 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:24.466926 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:24.468698 ignition[797]: disks: disks passed Jan 17 00:49:24.468761 ignition[797]: Ignition finished successfully Jan 17 00:49:24.504879 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:49:24.507089 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:49:24.527650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:49:24.531768 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:49:24.547733 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:49:24.547953 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 00:49:24.570552 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:49:24.606427 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:49:24.619068 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:49:24.652426 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:49:24.908338 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:49:24.909698 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:49:24.912165 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:49:24.942457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:49:24.952500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:49:24.963376 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 17 00:49:24.963405 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:49:24.953098 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:49:24.985744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:49:24.985773 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:49:24.985789 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:49:24.953164 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:49:24.953314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:49:24.992030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:49:25.001319 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:49:25.023710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:49:25.115115 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:49:25.131016 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:49:25.146921 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:49:25.159104 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:49:25.359489 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:49:25.380942 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:49:25.396037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:49:25.417859 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:49:25.432899 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:49:25.475103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 00:49:25.487454 ignition[928]: INFO : Ignition 2.19.0 Jan 17 00:49:25.487454 ignition[928]: INFO : Stage: mount Jan 17 00:49:25.487454 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:25.487454 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:25.505709 ignition[928]: INFO : mount: mount passed Jan 17 00:49:25.505709 ignition[928]: INFO : Ignition finished successfully Jan 17 00:49:25.493484 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:49:25.516741 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:49:25.549091 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:49:25.574502 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jan 17 00:49:25.581690 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:49:25.581731 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:49:25.588139 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:49:25.601665 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:49:25.610826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:49:25.659852 ignition[958]: INFO : Ignition 2.19.0 Jan 17 00:49:25.659852 ignition[958]: INFO : Stage: files Jan 17 00:49:25.668012 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:25.668012 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:25.679470 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:49:25.684696 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:49:25.684696 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:49:25.703864 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:49:25.711822 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:49:25.719507 unknown[958]: wrote ssh authorized keys file for user: core Jan 17 00:49:25.725430 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:49:25.731721 systemd-networkd[784]: eth0: Gained IPv6LL Jan 17 00:49:25.744003 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:49:25.752045 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:49:25.803490 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:49:25.926136 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:49:25.988321 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 17 00:49:26.208717 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:49:26.955706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:49:26.955706 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:49:26.972198 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:49:26.981839 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:49:26.981839 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:49:26.981839 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 00:49:27.002541 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:49:27.015840 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:49:27.015840 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 00:49:27.015840 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:49:27.103300 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:49:27.115403 ignition[958]: INFO : files: op(f): 
op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:49:27.123481 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:49:27.123481 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:49:27.136409 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:49:27.143545 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:49:27.152108 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:49:27.152108 ignition[958]: INFO : files: files passed Jan 17 00:49:27.163512 ignition[958]: INFO : Ignition finished successfully Jan 17 00:49:27.170373 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:49:27.187771 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:49:27.194150 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:49:27.204985 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:49:27.205187 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:49:27.231126 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:49:27.223811 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:49:27.252498 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:49:27.252498 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:49:27.231837 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:49:27.270114 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:49:27.266791 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:49:27.306946 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:49:27.308412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:49:27.322694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:49:27.330018 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:49:27.337508 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:49:27.353436 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:49:27.381796 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:49:27.406662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:49:27.421760 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:49:27.430324 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:49:27.439758 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:49:27.446832 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
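The Ignition files stage above writes the helm tarball, the core user's SSH keys, several manifests under /home/core, /etc/flatcar/update.conf, and the kubernetes sysext link, then enables prepare-helm.service and disables the preset for coreos-metadata.service. Purely as an illustration, a Python rendering of an Ignition-style config that could drive a run like this is sketched below; the field names follow the Ignition v3 spec as I understand it, and the spec version, SSH key, inline contents, and unit body are placeholders, not values recovered from this host.

    import json

    # Illustrative only: an Ignition-style config (spec v3.x) whose application
    # would produce operations similar to those logged by ignition[958] above.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,placeholder"}},
                {"path": "/home/core/nginx.yaml",
                 "contents": {"source": "data:,placeholder"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,placeholder"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n# placeholder unit body"},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))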
Jan 17 00:49:27.451089 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:49:27.460635 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:49:27.468351 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:49:27.475343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:49:27.485333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:49:27.496772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:49:27.506382 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:49:27.515833 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:49:27.526684 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:49:27.533459 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:49:27.542428 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:49:27.548881 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:49:27.549108 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:49:27.561029 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:49:27.570503 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:49:27.582005 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:49:27.586365 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:49:27.596793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:49:27.601422 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:49:27.611827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:49:27.616797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:49:27.628298 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:49:27.637041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:49:27.637824 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:49:27.658964 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:49:27.659313 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:49:27.679894 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:49:27.680131 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:49:27.693388 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:49:27.694041 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:49:27.706370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:49:27.706521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:49:27.715152 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:49:27.715435 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:49:27.745680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:49:27.745854 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:49:27.746082 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:49:27.754428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 17 00:49:27.768378 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:49:27.797368 ignition[1012]: INFO : Ignition 2.19.0 Jan 17 00:49:27.797368 ignition[1012]: INFO : Stage: umount Jan 17 00:49:27.797368 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:49:27.797368 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:49:27.797368 ignition[1012]: INFO : umount: umount passed Jan 17 00:49:27.797368 ignition[1012]: INFO : Ignition finished successfully Jan 17 00:49:27.768676 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:49:27.782027 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:49:27.782299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:49:27.801870 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:49:27.802164 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:49:27.806776 systemd[1]: Stopped target network.target - Network. Jan 17 00:49:27.807307 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:49:27.807386 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:49:27.808035 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:49:27.808093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:49:27.810092 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:49:27.810154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:49:27.815958 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:49:27.816026 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:49:27.818524 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:49:27.819453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:49:27.822142 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:49:27.822359 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:49:27.824936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:49:27.848118 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:49:27.848350 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:49:27.871280 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 17 00:49:27.871818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:49:27.871892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:49:27.883521 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:49:27.883764 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:49:27.899049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:49:27.899129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:49:27.953540 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:49:27.960858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:49:27.960987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:49:27.972167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:49:27.972350 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:49:27.983093 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:49:27.983186 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:49:27.996008 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:49:28.010672 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:49:28.010828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:49:28.040977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:49:28.041092 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:49:28.045545 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:49:28.045908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:49:28.056462 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:49:28.056529 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:49:28.067549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:49:28.067657 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:49:28.071751 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:49:28.071807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:49:28.086161 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:49:28.086316 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:49:28.094928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:49:28.094991 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:49:28.137049 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:49:28.143675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:49:28.143775 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:49:28.167526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:49:28.167686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:49:28.171664 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:49:28.171870 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:49:28.233275 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:49:28.233444 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:49:28.245023 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:49:28.264733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:49:28.284545 systemd[1]: Switching root. Jan 17 00:49:28.327835 systemd-journald[195]: Journal stopped Jan 17 00:49:29.937893 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:49:29.937978 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:49:29.938006 kernel: SELinux: policy capability open_perms=1 Jan 17 00:49:29.938032 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:49:29.938055 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:49:29.938077 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:49:29.938095 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:49:29.938120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:49:29.938138 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:49:29.938160 kernel: audit: type=1403 audit(1768610968.569:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:49:29.938179 systemd[1]: Successfully loaded SELinux policy in 60.388ms. Jan 17 00:49:29.938203 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.204ms. Jan 17 00:49:29.938301 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:49:29.938326 systemd[1]: Detected virtualization kvm. Jan 17 00:49:29.938339 systemd[1]: Detected architecture x86-64. Jan 17 00:49:29.938354 systemd[1]: Detected first boot. Jan 17 00:49:29.938364 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:49:29.938375 zram_generator::config[1056]: No configuration found. Jan 17 00:49:29.938387 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:49:29.938398 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:49:29.938408 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:49:29.938419 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:49:29.938430 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:49:29.938442 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:49:29.938455 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:49:29.938465 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:49:29.938476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:49:29.938487 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:49:29.938498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:49:29.938508 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:49:29.938519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:49:29.938529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:49:29.938543 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:49:29.938555 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:49:29.938565 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:49:29.938614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:49:29.938625 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:49:29.938636 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:49:29.938646 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:49:29.938657 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:49:29.938667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:49:29.938682 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:49:29.938693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:49:29.938703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:49:29.938714 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:49:29.938724 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:49:29.938735 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:49:29.938745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:49:29.938756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:49:29.938769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:49:29.938780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:49:29.938790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:49:29.938801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:49:29.938812 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:49:29.938823 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:49:29.938834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:49:29.938844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:49:29.938855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:49:29.938868 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:49:29.938878 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:49:29.938889 systemd[1]: Reached target machines.target - Containers. Jan 17 00:49:29.938900 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:49:29.938910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:49:29.938921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:49:29.938932 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:49:29.938942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:49:29.938955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:49:29.938966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:49:29.938976 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:49:29.938986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:49:29.938997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:49:29.939008 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:49:29.939019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:49:29.939029 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:49:29.939040 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:49:29.939053 kernel: fuse: init (API version 7.39) Jan 17 00:49:29.939063 kernel: loop: module loaded Jan 17 00:49:29.939073 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:49:29.939084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:49:29.939095 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:49:29.939125 systemd-journald[1140]: Collecting audit messages is disabled. Jan 17 00:49:29.939146 systemd-journald[1140]: Journal started Jan 17 00:49:29.939167 systemd-journald[1140]: Runtime Journal (/run/log/journal/ce1824dc1b194e699c90d05eef535430) is 6.0M, max 48.4M, 42.3M free. Jan 17 00:49:29.373837 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:49:29.396313 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:49:29.397321 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:49:29.397901 systemd[1]: systemd-journald.service: Consumed 1.588s CPU time. Jan 17 00:49:29.945361 kernel: ACPI: bus type drm_connector registered Jan 17 00:49:29.945427 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:49:29.975010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:49:29.975111 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:49:29.975143 systemd[1]: Stopped verity-setup.service. Jan 17 00:49:29.983362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:49:29.992816 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:49:29.993647 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:49:29.998384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:49:30.002707 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:49:30.006742 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:49:30.011052 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:49:30.015615 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:49:30.019322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:49:30.024608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:49:30.031038 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:49:30.031635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:49:30.037150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:49:30.037446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 00:49:30.042126 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:49:30.042445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:49:30.047483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:49:30.047809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:49:30.053476 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:49:30.053864 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:49:30.059161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:49:30.059489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:49:30.065026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:49:30.071043 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:49:30.076995 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:49:30.100850 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:49:30.117510 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:49:30.124157 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:49:30.128848 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:49:30.128922 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:49:30.134504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:49:30.141414 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:49:30.148011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:49:30.152461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:49:30.155742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:49:30.162503 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:49:30.167441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:49:30.169547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:49:30.174054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:49:30.177719 systemd-journald[1140]: Time spent on flushing to /var/log/journal/ce1824dc1b194e699c90d05eef535430 is 18.837ms for 939 entries. Jan 17 00:49:30.177719 systemd-journald[1140]: System Journal (/var/log/journal/ce1824dc1b194e699c90d05eef535430) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:49:30.231500 systemd-journald[1140]: Received client request to flush runtime journal. Jan 17 00:49:30.231559 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:49:30.178366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:49:30.189978 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:49:30.199792 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 17 00:49:30.211894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:49:30.225364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:49:30.233139 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:49:30.240112 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:49:30.246940 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:49:30.254455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:49:30.262917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:49:30.278758 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:49:30.300287 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:49:30.300535 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:49:30.310627 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:49:30.320463 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:49:30.339498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:49:30.348347 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:49:30.353327 kernel: loop1: detected capacity change from 0 to 219144 Jan 17 00:49:30.353879 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:49:30.367354 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:49:30.399775 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 17 00:49:30.399826 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 17 00:49:30.409685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:49:30.428702 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:49:30.504304 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:49:30.529305 kernel: loop4: detected capacity change from 0 to 219144 Jan 17 00:49:30.553343 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:49:30.586272 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:49:30.587168 (sd-merge)[1194]: Merged extensions into '/usr'. Jan 17 00:49:30.593303 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:49:30.593323 systemd[1]: Reloading... Jan 17 00:49:30.683358 zram_generator::config[1220]: No configuration found. Jan 17 00:49:30.766891 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:49:30.853213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:49:30.921705 systemd[1]: Reloading finished in 326 ms. Jan 17 00:49:30.963170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:49:30.968410 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 17 00:49:30.974205 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:49:31.003460 systemd[1]: Starting ensure-sysext.service... Jan 17 00:49:31.008693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:49:31.015964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:49:31.024334 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:49:31.024383 systemd[1]: Reloading... Jan 17 00:49:31.040329 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:49:31.040870 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:49:31.042203 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:49:31.042752 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 17 00:49:31.042882 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 17 00:49:31.047981 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:49:31.047997 systemd-tmpfiles[1259]: Skipping /boot Jan 17 00:49:31.056384 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Jan 17 00:49:31.065102 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:49:31.066406 systemd-tmpfiles[1259]: Skipping /boot Jan 17 00:49:31.104281 zram_generator::config[1287]: No configuration found. Jan 17 00:49:31.209303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316) Jan 17 00:49:31.249304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:49:31.249402 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:49:31.253302 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:49:31.259300 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:49:31.260805 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:49:31.288857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:49:31.357449 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:49:31.435755 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:49:31.437149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:49:31.443509 systemd[1]: Reloading finished in 418 ms. Jan 17 00:49:31.491297 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:49:31.513662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:49:31.536386 kernel: kvm_amd: TSC scaling supported Jan 17 00:49:31.536450 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:49:31.536468 kernel: kvm_amd: Nested Paging enabled Jan 17 00:49:31.539334 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:49:31.539368 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:49:31.597167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 00:49:31.614364 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:49:31.634325 systemd[1]: Finished ensure-sysext.service. Jan 17 00:49:31.647172 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:49:31.671995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:49:31.691923 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:49:31.699664 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:49:31.707116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:49:31.718690 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:49:31.725950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:49:31.733702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:49:31.737340 lvm[1366]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:49:31.745515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:49:31.755517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:49:31.761103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:49:31.763660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:49:31.771695 augenrules[1380]: No rules Jan 17 00:49:31.777661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:49:31.787459 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:49:31.797635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:49:31.803134 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:49:31.811434 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:49:31.818726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:49:31.823131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:49:31.824792 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:49:31.830096 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:49:31.835884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:49:31.836112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:49:31.841875 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:49:31.842075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:49:31.847315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:49:31.847534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:49:31.853206 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:49:31.853532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 00:49:31.858660 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:49:31.864903 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:49:31.871632 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:49:31.891449 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:49:31.899790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:49:31.915275 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:49:31.919610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:49:31.919710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:49:31.921357 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:49:31.926312 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:49:31.926815 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:49:32.084635 systemd-networkd[1386]: lo: Link UP Jan 17 00:49:32.084669 systemd-networkd[1386]: lo: Gained carrier Jan 17 00:49:32.087087 systemd-networkd[1386]: Enumeration completed Jan 17 00:49:32.088372 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:49:32.088430 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:49:32.088805 systemd-resolved[1388]: Positive Trust Anchors: Jan 17 00:49:32.089062 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:49:32.089137 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:49:32.090107 systemd-networkd[1386]: eth0: Link UP Jan 17 00:49:32.090116 systemd-networkd[1386]: eth0: Gained carrier Jan 17 00:49:32.090130 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:49:32.094059 systemd-resolved[1388]: Defaulting to hostname 'linux'. Jan 17 00:49:32.105411 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:49:32.106666 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Jan 17 00:49:33.465631 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:49:33.465780 systemd-timesyncd[1389]: Initial clock synchronization to Sat 2026-01-17 00:49:33.465364 UTC. Jan 17 00:49:33.465922 systemd-resolved[1388]: Clock change detected. Flushing caches. 
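The apparent jump in timestamps at the end of the block above (from 00:49:32.106666 directly to 00:49:33.465631) mostly reflects the clock step applied when systemd-timesyncd reaches 10.0.0.1:123 and performs its initial synchronization, after which systemd-resolved flushes its caches. As a small worked check (my own arithmetic, not taken from the log), the gap between those two consecutive journal timestamps is about 1.36 s:

    from datetime import datetime

    # Consecutive journal timestamps around the timesyncd step (from the log above).
    before = datetime.strptime("00:49:32.106666", "%H:%M:%S.%f")
    after = datetime.strptime("00:49:33.465631", "%H:%M:%S.%f")

    # ~1.359 s apparent gap; most of it is the forward clock step from the
    # initial time synchronization rather than elapsed boot time.
    print((after - before).total_seconds())  # 1.358965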
Jan 17 00:49:33.494005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:49:33.494644 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:49:33.510188 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:49:33.517261 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:49:33.523307 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:49:33.528851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:49:33.534815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:49:33.540224 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:49:33.547129 systemd[1]: Reached target network.target - Network. Jan 17 00:49:33.551106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:49:33.555650 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:49:33.560366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:49:33.565548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:49:33.571189 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:49:33.575883 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:49:33.575926 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:49:33.579501 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:49:33.583608 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:49:33.587823 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:49:33.592395 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:49:33.596930 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:49:33.602886 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:49:33.615820 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:49:33.622016 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:49:33.626459 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:49:33.630632 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:49:33.634387 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:49:33.637938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:49:33.638010 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:49:33.639834 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:49:33.644989 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:49:33.650146 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:49:33.657336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 00:49:33.660584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:49:33.662467 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:49:33.663858 jq[1425]: false Jan 17 00:49:33.667536 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:49:33.673204 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:49:33.680444 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:49:33.692984 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:49:33.694844 extend-filesystems[1426]: Found loop3 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found loop4 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found loop5 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found sr0 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda1 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda2 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda3 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found usr Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda4 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda6 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda7 Jan 17 00:49:33.694844 extend-filesystems[1426]: Found vda9 Jan 17 00:49:33.694844 extend-filesystems[1426]: Checking size of /dev/vda9 Jan 17 00:49:33.798387 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1332) Jan 17 00:49:33.798446 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:49:33.704170 dbus-daemon[1424]: [system] SELinux support is enabled Jan 17 00:49:33.705993 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:49:33.799118 extend-filesystems[1426]: Resized partition /dev/vda9 Jan 17 00:49:33.707275 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:49:33.806119 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:49:33.748209 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:49:33.772345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:49:33.814991 update_engine[1441]: I20260117 00:49:33.796448 1441 main.cc:92] Flatcar Update Engine starting Jan 17 00:49:33.814991 update_engine[1441]: I20260117 00:49:33.800359 1441 update_check_scheduler.cc:74] Next update check in 5m17s Jan 17 00:49:33.780425 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:49:33.821205 jq[1448]: true Jan 17 00:49:33.794421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:49:33.794672 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:49:33.795238 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:49:33.795530 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:49:33.816214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 17 00:49:33.816459 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:49:33.826286 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:49:33.836924 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:49:33.850552 jq[1451]: true Jan 17 00:49:33.854304 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:49:33.854304 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:49:33.854304 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:49:33.887942 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Jan 17 00:49:33.911341 tar[1450]: linux-amd64/LICENSE Jan 17 00:49:33.911341 tar[1450]: linux-amd64/helm Jan 17 00:49:33.856789 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:49:33.912250 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:49:33.856824 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:49:33.859112 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:49:33.859376 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:49:33.859570 systemd-logind[1432]: New seat seat0. Jan 17 00:49:33.878811 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:49:33.906352 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:49:33.915356 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:49:33.915588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:49:33.924409 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:49:33.931935 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:49:33.924587 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:49:33.941209 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:49:33.955870 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:49:33.965153 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:49:33.994630 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:49:33.999412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:49:34.008232 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:49:34.011369 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:49:34.011880 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:49:34.036461 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:49:34.051897 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:49:34.069199 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:49:34.088471 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
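The extend-filesystems run above grows the ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. the root filesystem is resized online to fill the partition. As a back-of-the-envelope check (my arithmetic, not from the log), that is roughly 2.1 GiB growing to about 7.1 GiB:

    BLOCK_SIZE = 4096  # ext4 block size, reported as "(4k)" in the kernel message

    old_blocks = 553_472    # size before the resize (resize2fs / fsck output above)
    new_blocks = 1_864_699  # size after "resized filesystem to 1864699"

    GiB = 1024 ** 3
    print(f"before: {old_blocks * BLOCK_SIZE / GiB:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK_SIZE / GiB:.2f} GiB")  # ~7.11 GiB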
Jan 17 00:49:34.094481 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:49:34.095594 containerd[1452]: time="2026-01-17T00:49:34.095472491Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:49:34.131146 containerd[1452]: time="2026-01-17T00:49:34.131092747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134170 containerd[1452]: time="2026-01-17T00:49:34.134096897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134170 containerd[1452]: time="2026-01-17T00:49:34.134156528Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:49:34.134257 containerd[1452]: time="2026-01-17T00:49:34.134181935Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:49:34.134429 containerd[1452]: time="2026-01-17T00:49:34.134386267Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:49:34.134464 containerd[1452]: time="2026-01-17T00:49:34.134430189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134547 containerd[1452]: time="2026-01-17T00:49:34.134508766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134574 containerd[1452]: time="2026-01-17T00:49:34.134546546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134879 containerd[1452]: time="2026-01-17T00:49:34.134840575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134914 containerd[1452]: time="2026-01-17T00:49:34.134879839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134914 containerd[1452]: time="2026-01-17T00:49:34.134895989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:49:34.134914 containerd[1452]: time="2026-01-17T00:49:34.134909314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.135108 containerd[1452]: time="2026-01-17T00:49:34.135013729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.135440 containerd[1452]: time="2026-01-17T00:49:34.135374392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:49:34.135583 containerd[1452]: time="2026-01-17T00:49:34.135542096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:49:34.135621 containerd[1452]: time="2026-01-17T00:49:34.135581089Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:49:34.135797 containerd[1452]: time="2026-01-17T00:49:34.135761515Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:49:34.135895 containerd[1452]: time="2026-01-17T00:49:34.135859157Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:49:34.142542 containerd[1452]: time="2026-01-17T00:49:34.142463470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:49:34.142634 containerd[1452]: time="2026-01-17T00:49:34.142563116Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:49:34.142634 containerd[1452]: time="2026-01-17T00:49:34.142585177Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:49:34.142634 containerd[1452]: time="2026-01-17T00:49:34.142604673Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:49:34.142634 containerd[1452]: time="2026-01-17T00:49:34.142622337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:49:34.142877 containerd[1452]: time="2026-01-17T00:49:34.142837508Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:49:34.143298 containerd[1452]: time="2026-01-17T00:49:34.143174197Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:49:34.143439 containerd[1452]: time="2026-01-17T00:49:34.143339396Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:49:34.143439 containerd[1452]: time="2026-01-17T00:49:34.143391153Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:49:34.143439 containerd[1452]: time="2026-01-17T00:49:34.143416009Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:49:34.143439 containerd[1452]: time="2026-01-17T00:49:34.143432970Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143448479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143462816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143478095Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143494586Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143510355Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143540 containerd[1452]: time="2026-01-17T00:49:34.143530202Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143545200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143570387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143587699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143603338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143624198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143640568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143657039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143681 containerd[1452]: time="2026-01-17T00:49:34.143675904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143761474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143783054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143801779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143816406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143833709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143849128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143866179Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143889002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143903098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 00:49:34.143927 containerd[1452]: time="2026-01-17T00:49:34.143915762Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.143966156Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.143987456Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144001462Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144016260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144028863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144090268Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144136264Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:49:34.144198 containerd[1452]: time="2026-01-17T00:49:34.144150090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:49:34.146497 containerd[1452]: time="2026-01-17T00:49:34.145594878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:49:34.146497 containerd[1452]: time="2026-01-17T00:49:34.145742723Z" level=info msg="Connect containerd service" Jan 17 00:49:34.146497 containerd[1452]: time="2026-01-17T00:49:34.145791785Z" level=info msg="using legacy CRI server" Jan 17 00:49:34.146497 containerd[1452]: time="2026-01-17T00:49:34.145801433Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:49:34.146497 containerd[1452]: time="2026-01-17T00:49:34.146424917Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:49:34.148274 containerd[1452]: time="2026-01-17T00:49:34.148224367Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:49:34.148622 containerd[1452]: time="2026-01-17T00:49:34.148540378Z" level=info msg="Start subscribing containerd event" Jan 17 00:49:34.148663 containerd[1452]: time="2026-01-17T00:49:34.148643931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:49:34.148937 containerd[1452]: time="2026-01-17T00:49:34.148676632Z" level=info msg="Start recovering state" Jan 17 00:49:34.149030 containerd[1452]: time="2026-01-17T00:49:34.148995558Z" level=info msg="Start event monitor" Jan 17 00:49:34.149195 containerd[1452]: time="2026-01-17T00:49:34.149148543Z" level=info msg="Start snapshots syncer" Jan 17 00:49:34.149313 containerd[1452]: time="2026-01-17T00:49:34.149263918Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:49:34.149508 containerd[1452]: time="2026-01-17T00:49:34.149459063Z" level=info msg="Start streaming server" Jan 17 00:49:34.149828 containerd[1452]: time="2026-01-17T00:49:34.148800163Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:49:34.149968 containerd[1452]: time="2026-01-17T00:49:34.149903823Z" level=info msg="containerd successfully booted in 0.055866s" Jan 17 00:49:34.149995 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:49:34.377421 tar[1450]: linux-amd64/README.md Jan 17 00:49:34.400238 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:49:35.084321 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 17 00:49:35.088808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:49:35.096500 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:49:35.112183 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 17 00:49:35.118151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:35.126494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:49:35.166834 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:49:35.167218 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:49:35.172870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:49:35.178799 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:49:36.147688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:49:36.155045 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:49:36.161606 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:49:36.161890 systemd[1]: Startup finished in 1.788s (kernel) + 10.725s (initrd) + 6.296s (userspace) = 18.810s. Jan 17 00:49:36.284976 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:49:36.300778 systemd[1]: Started sshd@0-10.0.0.159:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362). Jan 17 00:49:36.379209 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:36.385299 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:36.407420 systemd-logind[1432]: New session 1 of user core. Jan 17 00:49:36.409871 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:49:36.421674 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:49:36.440162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:49:36.453180 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:49:36.462589 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:49:36.629781 systemd[1551]: Queued start job for default target default.target. Jan 17 00:49:36.646949 systemd[1551]: Created slice app.slice - User Application Slice. Jan 17 00:49:36.647028 systemd[1551]: Reached target paths.target - Paths. Jan 17 00:49:36.647051 systemd[1551]: Reached target timers.target - Timers. Jan 17 00:49:36.651671 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:49:36.672257 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:49:36.672492 systemd[1551]: Reached target sockets.target - Sockets. Jan 17 00:49:36.672518 systemd[1551]: Reached target basic.target - Basic System. Jan 17 00:49:36.672581 systemd[1551]: Reached target default.target - Main User Target. Jan 17 00:49:36.672638 systemd[1551]: Startup finished in 191ms. Jan 17 00:49:36.672937 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:49:36.681043 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:49:36.763196 systemd[1]: Started sshd@1-10.0.0.159:22-10.0.0.1:54372.service - OpenSSH per-connection server daemon (10.0.0.1:54372). 
Jan 17 00:49:36.813635 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 54372 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:36.816561 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:36.824193 systemd-logind[1432]: New session 2 of user core. Jan 17 00:49:36.824819 kubelet[1535]: E0117 00:49:36.824368 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:49:36.838964 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:49:36.839386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:49:36.839681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:49:36.840245 systemd[1]: kubelet.service: Consumed 1.153s CPU time. Jan 17 00:49:36.907836 sshd[1563]: pam_unix(sshd:session): session closed for user core Jan 17 00:49:36.924904 systemd[1]: sshd@1-10.0.0.159:22-10.0.0.1:54372.service: Deactivated successfully. Jan 17 00:49:36.928605 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:49:36.932668 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:49:36.945332 systemd[1]: Started sshd@2-10.0.0.159:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). Jan 17 00:49:36.948022 systemd-logind[1432]: Removed session 2. Jan 17 00:49:36.991214 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:36.991685 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:37.003403 systemd-logind[1432]: New session 3 of user core. Jan 17 00:49:37.022364 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:49:37.079848 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 17 00:49:37.102395 systemd[1]: sshd@2-10.0.0.159:22-10.0.0.1:54376.service: Deactivated successfully. Jan 17 00:49:37.104637 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:49:37.107131 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:49:37.118291 systemd[1]: Started sshd@3-10.0.0.159:22-10.0.0.1:54386.service - OpenSSH per-connection server daemon (10.0.0.1:54386). Jan 17 00:49:37.120226 systemd-logind[1432]: Removed session 3. Jan 17 00:49:37.160934 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 54386 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:37.163302 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:37.170348 systemd-logind[1432]: New session 4 of user core. Jan 17 00:49:37.180184 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:49:37.244287 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 17 00:49:37.253866 systemd[1]: sshd@3-10.0.0.159:22-10.0.0.1:54386.service: Deactivated successfully. Jan 17 00:49:37.256436 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:49:37.258687 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. 
Jan 17 00:49:37.266526 systemd[1]: Started sshd@4-10.0.0.159:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402). Jan 17 00:49:37.268218 systemd-logind[1432]: Removed session 4. Jan 17 00:49:37.304024 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:37.306427 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:37.314224 systemd-logind[1432]: New session 5 of user core. Jan 17 00:49:37.326005 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:49:37.399522 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:49:37.400169 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:49:37.425229 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 17 00:49:37.429947 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 17 00:49:37.442467 systemd[1]: sshd@4-10.0.0.159:22-10.0.0.1:54402.service: Deactivated successfully. Jan 17 00:49:37.445011 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:49:37.448149 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:49:37.464185 systemd[1]: Started sshd@5-10.0.0.159:22-10.0.0.1:54410.service - OpenSSH per-connection server daemon (10.0.0.1:54410). Jan 17 00:49:37.465891 systemd-logind[1432]: Removed session 5. Jan 17 00:49:37.511860 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 54410 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:37.514439 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:37.521218 systemd-logind[1432]: New session 6 of user core. Jan 17 00:49:37.535007 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:49:37.597643 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:49:37.598242 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:49:37.605049 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 17 00:49:37.614196 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:49:37.614619 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:49:37.643538 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:49:37.646477 auditctl[1601]: No rules Jan 17 00:49:37.647106 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:49:37.647401 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:49:37.652470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:49:37.703655 augenrules[1619]: No rules Jan 17 00:49:37.706332 sudo[1597]: pam_unix(sudo:session): session closed for user root Jan 17 00:49:37.704498 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:49:37.709552 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 17 00:49:37.729580 systemd[1]: sshd@5-10.0.0.159:22-10.0.0.1:54410.service: Deactivated successfully. Jan 17 00:49:37.732459 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:49:37.740402 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. 
Jan 17 00:49:37.751571 systemd[1]: Started sshd@6-10.0.0.159:22-10.0.0.1:54426.service - OpenSSH per-connection server daemon (10.0.0.1:54426). Jan 17 00:49:37.753237 systemd-logind[1432]: Removed session 6. Jan 17 00:49:37.796765 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:49:37.799332 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:49:37.806762 systemd-logind[1432]: New session 7 of user core. Jan 17 00:49:37.824314 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:49:37.887435 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:49:37.887986 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:49:38.346265 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:49:38.348042 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:49:38.759866 dockerd[1648]: time="2026-01-17T00:49:38.759594995Z" level=info msg="Starting up" Jan 17 00:49:39.103052 dockerd[1648]: time="2026-01-17T00:49:39.102939168Z" level=info msg="Loading containers: start." Jan 17 00:49:39.291825 kernel: Initializing XFRM netlink socket Jan 17 00:49:39.438409 systemd-networkd[1386]: docker0: Link UP Jan 17 00:49:39.469002 dockerd[1648]: time="2026-01-17T00:49:39.468669969Z" level=info msg="Loading containers: done." Jan 17 00:49:39.496474 dockerd[1648]: time="2026-01-17T00:49:39.496370851Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:49:39.496659 dockerd[1648]: time="2026-01-17T00:49:39.496578359Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:49:39.496908 dockerd[1648]: time="2026-01-17T00:49:39.496840559Z" level=info msg="Daemon has completed initialization" Jan 17 00:49:39.569337 dockerd[1648]: time="2026-01-17T00:49:39.567471992Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:49:39.569497 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:49:40.557587 containerd[1452]: time="2026-01-17T00:49:40.557381647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:49:41.160931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110223380.mount: Deactivated successfully. 
Jan 17 00:49:42.560022 containerd[1452]: time="2026-01-17T00:49:42.557994856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:42.560610 containerd[1452]: time="2026-01-17T00:49:42.560118996Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 17 00:49:42.561944 containerd[1452]: time="2026-01-17T00:49:42.561859175Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:42.566320 containerd[1452]: time="2026-01-17T00:49:42.566219504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:42.568335 containerd[1452]: time="2026-01-17T00:49:42.568219398Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.010764304s" Jan 17 00:49:42.568335 containerd[1452]: time="2026-01-17T00:49:42.568299308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:49:42.570044 containerd[1452]: time="2026-01-17T00:49:42.569212302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:49:43.896908 containerd[1452]: time="2026-01-17T00:49:43.896781584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:43.898578 containerd[1452]: time="2026-01-17T00:49:43.898442218Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 17 00:49:43.900292 containerd[1452]: time="2026-01-17T00:49:43.900218155Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:43.908341 containerd[1452]: time="2026-01-17T00:49:43.908219451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:43.910439 containerd[1452]: time="2026-01-17T00:49:43.910287655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.341010282s" Jan 17 00:49:43.910591 containerd[1452]: time="2026-01-17T00:49:43.910460698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:49:43.912027 
containerd[1452]: time="2026-01-17T00:49:43.911944198Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:49:45.018234 containerd[1452]: time="2026-01-17T00:49:45.016905424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:45.018234 containerd[1452]: time="2026-01-17T00:49:45.017976037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 17 00:49:45.020025 containerd[1452]: time="2026-01-17T00:49:45.019953165Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:45.024836 containerd[1452]: time="2026-01-17T00:49:45.024785334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:45.027442 containerd[1452]: time="2026-01-17T00:49:45.027366033Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.114597084s" Jan 17 00:49:45.027497 containerd[1452]: time="2026-01-17T00:49:45.027455419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:49:45.030218 containerd[1452]: time="2026-01-17T00:49:45.028055799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:49:46.241463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291960480.mount: Deactivated successfully. 
Jan 17 00:49:46.591117 containerd[1452]: time="2026-01-17T00:49:46.590967102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:46.595186 containerd[1452]: time="2026-01-17T00:49:46.595009620Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 17 00:49:46.596598 containerd[1452]: time="2026-01-17T00:49:46.596470139Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:46.599531 containerd[1452]: time="2026-01-17T00:49:46.599317855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:46.601485 containerd[1452]: time="2026-01-17T00:49:46.600820718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.572646658s" Jan 17 00:49:46.601485 containerd[1452]: time="2026-01-17T00:49:46.600874018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:49:46.602194 containerd[1452]: time="2026-01-17T00:49:46.601991815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:49:47.083316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:49:47.098272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:47.134770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305657201.mount: Deactivated successfully. Jan 17 00:49:47.314814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:49:47.333442 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:49:47.405954 kubelet[1883]: E0117 00:49:47.405850 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:49:47.413347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:49:47.413602 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:49:48.878668 containerd[1452]: time="2026-01-17T00:49:48.878609667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:48.881202 containerd[1452]: time="2026-01-17T00:49:48.881070733Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 17 00:49:48.885797 containerd[1452]: time="2026-01-17T00:49:48.885651567Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:48.893349 containerd[1452]: time="2026-01-17T00:49:48.893203416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:48.895510 containerd[1452]: time="2026-01-17T00:49:48.895342599Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.293287245s" Jan 17 00:49:48.895510 containerd[1452]: time="2026-01-17T00:49:48.895418762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:49:48.897488 containerd[1452]: time="2026-01-17T00:49:48.897182669Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:49:49.336424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635980601.mount: Deactivated successfully. 
Jan 17 00:49:49.351750 containerd[1452]: time="2026-01-17T00:49:49.351617975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:49.353615 containerd[1452]: time="2026-01-17T00:49:49.353504798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 17 00:49:49.354994 containerd[1452]: time="2026-01-17T00:49:49.354785049Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:49.362328 containerd[1452]: time="2026-01-17T00:49:49.362224058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:49.363939 containerd[1452]: time="2026-01-17T00:49:49.363830881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 466.618185ms" Jan 17 00:49:49.363939 containerd[1452]: time="2026-01-17T00:49:49.363891254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:49:49.364568 containerd[1452]: time="2026-01-17T00:49:49.364535545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:49:49.934871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228585276.mount: Deactivated successfully. Jan 17 00:49:53.114472 containerd[1452]: time="2026-01-17T00:49:53.114280061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:53.116329 containerd[1452]: time="2026-01-17T00:49:53.116175753Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 17 00:49:53.118322 containerd[1452]: time="2026-01-17T00:49:53.118242355Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:53.123535 containerd[1452]: time="2026-01-17T00:49:53.123474292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:49:53.125765 containerd[1452]: time="2026-01-17T00:49:53.125624750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.761056494s" Jan 17 00:49:53.125765 containerd[1452]: time="2026-01-17T00:49:53.125678771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:49:57.441058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 00:49:57.454155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:57.472427 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:49:57.472578 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:49:57.473059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:49:57.492398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:57.531914 systemd[1]: Reloading requested from client PID 2031 ('systemctl') (unit session-7.scope)... Jan 17 00:49:57.531960 systemd[1]: Reloading... Jan 17 00:49:57.670798 zram_generator::config[2073]: No configuration found. Jan 17 00:49:57.840663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:49:57.947946 systemd[1]: Reloading finished in 415 ms. Jan 17 00:49:58.030620 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:58.035436 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:49:58.035818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:49:58.038592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:49:58.228601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:49:58.235588 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:49:58.311491 kubelet[2120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:49:58.311491 kubelet[2120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:49:58.311998 kubelet[2120]: I0117 00:49:58.311665 2120 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:49:59.383883 kubelet[2120]: I0117 00:49:59.383830 2120 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:49:59.383883 kubelet[2120]: I0117 00:49:59.383876 2120 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:49:59.386356 kubelet[2120]: I0117 00:49:59.386286 2120 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:49:59.386356 kubelet[2120]: I0117 00:49:59.386324 2120 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:49:59.386635 kubelet[2120]: I0117 00:49:59.386567 2120 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:49:59.418643 kubelet[2120]: E0117 00:49:59.418479 2120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:49:59.422239 kubelet[2120]: I0117 00:49:59.422196 2120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:49:59.427374 kubelet[2120]: E0117 00:49:59.427319 2120 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:49:59.427430 kubelet[2120]: I0117 00:49:59.427397 2120 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:49:59.434745 kubelet[2120]: I0117 00:49:59.434607 2120 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:49:59.435079 kubelet[2120]: I0117 00:49:59.434951 2120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:49:59.435271 kubelet[2120]: I0117 00:49:59.435010 2120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:49:59.435271 kubelet[2120]: I0117 00:49:59.435229 2120 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:49:59.435271 kubelet[2120]: I0117 00:49:59.435240 2120 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:49:59.435514 kubelet[2120]: I0117 00:49:59.435354 2120 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Jan 17 00:49:59.439974 kubelet[2120]: I0117 00:49:59.439871 2120 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:49:59.441657 kubelet[2120]: I0117 00:49:59.441561 2120 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:49:59.441657 kubelet[2120]: I0117 00:49:59.441622 2120 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:49:59.441657 kubelet[2120]: I0117 00:49:59.441649 2120 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:49:59.441883 kubelet[2120]: I0117 00:49:59.441672 2120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:49:59.443778 kubelet[2120]: E0117 00:49:59.442597 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:49:59.443778 kubelet[2120]: E0117 00:49:59.442815 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:49:59.445310 kubelet[2120]: I0117 00:49:59.445292 2120 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:49:59.446071 kubelet[2120]: I0117 00:49:59.445993 2120 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:49:59.446071 kubelet[2120]: I0117 00:49:59.446052 2120 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:49:59.446219 kubelet[2120]: W0117 00:49:59.446154 2120 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 00:49:59.452079 kubelet[2120]: I0117 00:49:59.451848 2120 server.go:1262] "Started kubelet" Jan 17 00:49:59.452490 kubelet[2120]: I0117 00:49:59.452393 2120 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:49:59.452607 kubelet[2120]: I0117 00:49:59.452506 2120 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:49:59.454048 kubelet[2120]: I0117 00:49:59.453019 2120 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:49:59.454048 kubelet[2120]: I0117 00:49:59.453173 2120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:49:59.454048 kubelet[2120]: I0117 00:49:59.453503 2120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:49:59.456599 kubelet[2120]: I0117 00:49:59.456557 2120 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:49:59.459600 kubelet[2120]: E0117 00:49:59.455999 2120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.159:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.159:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5e55602c4d1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:49:59.451798813 +0000 UTC m=+1.210485651,LastTimestamp:2026-01-17 00:49:59.451798813 +0000 UTC m=+1.210485651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:49:59.459600 kubelet[2120]: E0117 00:49:59.457861 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:49:59.459600 kubelet[2120]: I0117 00:49:59.457896 2120 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:49:59.459600 kubelet[2120]: E0117 00:49:59.458038 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="200ms" Jan 17 00:49:59.459600 kubelet[2120]: I0117 00:49:59.458060 2120 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:49:59.459600 kubelet[2120]: I0117 00:49:59.458148 2120 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:49:59.459600 kubelet[2120]: I0117 00:49:59.458166 2120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:49:59.460564 kubelet[2120]: E0117 00:49:59.458481 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:49:59.460564 kubelet[2120]: I0117 00:49:59.459848 2120 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:49:59.460564 kubelet[2120]: I0117 00:49:59.459911 2120 factory.go:221] Registration of 
the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:49:59.463202 kubelet[2120]: I0117 00:49:59.462471 2120 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:49:59.463896 kubelet[2120]: E0117 00:49:59.463846 2120 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:49:59.489787 kubelet[2120]: I0117 00:49:59.489131 2120 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:49:59.489787 kubelet[2120]: I0117 00:49:59.489155 2120 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:49:59.489787 kubelet[2120]: I0117 00:49:59.489173 2120 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:49:59.491066 kubelet[2120]: I0117 00:49:59.490486 2120 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:49:59.493410 kubelet[2120]: I0117 00:49:59.493350 2120 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:49:59.493410 kubelet[2120]: I0117 00:49:59.493400 2120 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:49:59.493508 kubelet[2120]: I0117 00:49:59.493421 2120 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:49:59.493508 kubelet[2120]: E0117 00:49:59.493457 2120 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:49:59.495647 kubelet[2120]: I0117 00:49:59.493800 2120 policy_none.go:49] "None policy: Start" Jan 17 00:49:59.495647 kubelet[2120]: I0117 00:49:59.493817 2120 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:49:59.495647 kubelet[2120]: I0117 00:49:59.493834 2120 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:49:59.495647 kubelet[2120]: E0117 00:49:59.495191 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:49:59.496258 kubelet[2120]: I0117 00:49:59.496242 2120 policy_none.go:47] "Start" Jan 17 00:49:59.502514 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:49:59.523067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:49:59.527528 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 00:49:59.545327 kubelet[2120]: E0117 00:49:59.545077 2120 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:49:59.545515 kubelet[2120]: I0117 00:49:59.545464 2120 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:49:59.545553 kubelet[2120]: I0117 00:49:59.545481 2120 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:49:59.546353 kubelet[2120]: I0117 00:49:59.546207 2120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:49:59.547571 kubelet[2120]: E0117 00:49:59.547539 2120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:49:59.547793 kubelet[2120]: E0117 00:49:59.547644 2120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:49:59.609773 systemd[1]: Created slice kubepods-burstable-podfcb9c55cb7d6f782ead6bddb67fb525d.slice - libcontainer container kubepods-burstable-podfcb9c55cb7d6f782ead6bddb67fb525d.slice. Jan 17 00:49:59.631248 kubelet[2120]: E0117 00:49:59.631161 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:49:59.633877 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 17 00:49:59.643392 kubelet[2120]: E0117 00:49:59.643341 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:49:59.646992 kubelet[2120]: I0117 00:49:59.646840 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:49:59.647393 kubelet[2120]: E0117 00:49:59.647188 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jan 17 00:49:59.648030 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 17 00:49:59.651181 kubelet[2120]: E0117 00:49:59.650968 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:49:59.658620 kubelet[2120]: E0117 00:49:59.658545 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="400ms" Jan 17 00:49:59.758637 kubelet[2120]: I0117 00:49:59.758526 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:49:59.758637 kubelet[2120]: I0117 00:49:59.758641 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:49:59.758932 kubelet[2120]: I0117 00:49:59.758671 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:49:59.758932 kubelet[2120]: I0117 00:49:59.758760 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:49:59.758932 kubelet[2120]: I0117 00:49:59.758783 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:49:59.759021 kubelet[2120]: I0117 00:49:59.758896 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:49:59.759021 kubelet[2120]: I0117 00:49:59.758974 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:49:59.759021 kubelet[2120]: I0117 00:49:59.759010 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:49:59.759161 kubelet[2120]: I0117 00:49:59.759040 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:49:59.850435 kubelet[2120]: I0117 00:49:59.850247 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:49:59.850774 kubelet[2120]: E0117 00:49:59.850668 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jan 17 00:49:59.936534 kubelet[2120]: E0117 00:49:59.936339 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:49:59.938253 containerd[1452]: time="2026-01-17T00:49:59.938145582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fcb9c55cb7d6f782ead6bddb67fb525d,Namespace:kube-system,Attempt:0,}" Jan 17 00:49:59.947610 kubelet[2120]: E0117 00:49:59.947514 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:49:59.948301 containerd[1452]: time="2026-01-17T00:49:59.948191422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 17 00:49:59.954313 kubelet[2120]: E0117 00:49:59.954156 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:49:59.954805 containerd[1452]: time="2026-01-17T00:49:59.954674369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 17 00:50:00.060052 kubelet[2120]: E0117 00:50:00.059995 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="800ms" Jan 17 00:50:00.253775 kubelet[2120]: I0117 00:50:00.253490 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:50:00.254272 kubelet[2120]: E0117 00:50:00.254190 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jan 17 00:50:00.371034 kubelet[2120]: E0117 00:50:00.370933 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 
17 00:50:00.375412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509117635.mount: Deactivated successfully. Jan 17 00:50:00.382153 containerd[1452]: time="2026-01-17T00:50:00.382013366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:50:00.386279 containerd[1452]: time="2026-01-17T00:50:00.386167323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:50:00.387753 containerd[1452]: time="2026-01-17T00:50:00.387576099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:50:00.389034 containerd[1452]: time="2026-01-17T00:50:00.388923966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:50:00.390450 containerd[1452]: time="2026-01-17T00:50:00.390374190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:50:00.391804 containerd[1452]: time="2026-01-17T00:50:00.391645613Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:50:00.392774 containerd[1452]: time="2026-01-17T00:50:00.392741945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:50:00.395974 containerd[1452]: time="2026-01-17T00:50:00.395864332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 441.012551ms" Jan 17 00:50:00.397850 containerd[1452]: time="2026-01-17T00:50:00.397784802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.502765ms" Jan 17 00:50:00.398881 containerd[1452]: time="2026-01-17T00:50:00.398618062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:50:00.402436 containerd[1452]: time="2026-01-17T00:50:00.402326502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.047718ms" Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.542792439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.542856178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.542874673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.543036345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.543900493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.544133708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.544156140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.544385 containerd[1452]: time="2026-01-17T00:50:00.544273359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.549020 containerd[1452]: time="2026-01-17T00:50:00.548582727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:00.551994 containerd[1452]: time="2026-01-17T00:50:00.551861950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:00.554502 containerd[1452]: time="2026-01-17T00:50:00.551975421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.554502 containerd[1452]: time="2026-01-17T00:50:00.552510531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:00.578398 systemd[1]: Started cri-containerd-296842b89efa38393c3cb0b94dce7d773559f261bb8fa1f8316b5791118eb8b7.scope - libcontainer container 296842b89efa38393c3cb0b94dce7d773559f261bb8fa1f8316b5791118eb8b7. Jan 17 00:50:00.585024 systemd[1]: Started cri-containerd-4b36c86277a9d2b4a43aa3844f3e1ddd4a839fdef2d779c611072738de3a5b26.scope - libcontainer container 4b36c86277a9d2b4a43aa3844f3e1ddd4a839fdef2d779c611072738de3a5b26. Jan 17 00:50:00.635342 systemd[1]: Started cri-containerd-a54f26ecf30c0f79f00c0198c0c13793a789046d9101c5799ae0300d8fb641c9.scope - libcontainer container a54f26ecf30c0f79f00c0198c0c13793a789046d9101c5799ae0300d8fb641c9. 
Jan 17 00:50:00.647429 containerd[1452]: time="2026-01-17T00:50:00.647362693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fcb9c55cb7d6f782ead6bddb67fb525d,Namespace:kube-system,Attempt:0,} returns sandbox id \"296842b89efa38393c3cb0b94dce7d773559f261bb8fa1f8316b5791118eb8b7\"" Jan 17 00:50:00.650786 kubelet[2120]: E0117 00:50:00.650656 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:00.665427 containerd[1452]: time="2026-01-17T00:50:00.663991665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b36c86277a9d2b4a43aa3844f3e1ddd4a839fdef2d779c611072738de3a5b26\"" Jan 17 00:50:00.666358 containerd[1452]: time="2026-01-17T00:50:00.665635354Z" level=info msg="CreateContainer within sandbox \"296842b89efa38393c3cb0b94dce7d773559f261bb8fa1f8316b5791118eb8b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:50:00.667991 kubelet[2120]: E0117 00:50:00.667916 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:00.676950 containerd[1452]: time="2026-01-17T00:50:00.676826211Z" level=info msg="CreateContainer within sandbox \"4b36c86277a9d2b4a43aa3844f3e1ddd4a839fdef2d779c611072738de3a5b26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:50:00.707545 containerd[1452]: time="2026-01-17T00:50:00.707349749Z" level=info msg="CreateContainer within sandbox \"296842b89efa38393c3cb0b94dce7d773559f261bb8fa1f8316b5791118eb8b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9387fd794ab97c4bdd1822e22076707d6a1b0a333a14ad26a33f1a8dc298d892\"" Jan 17 00:50:00.707975 containerd[1452]: time="2026-01-17T00:50:00.707857794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"a54f26ecf30c0f79f00c0198c0c13793a789046d9101c5799ae0300d8fb641c9\"" Jan 17 00:50:00.708518 containerd[1452]: time="2026-01-17T00:50:00.708361404Z" level=info msg="StartContainer for \"9387fd794ab97c4bdd1822e22076707d6a1b0a333a14ad26a33f1a8dc298d892\"" Jan 17 00:50:00.708887 kubelet[2120]: E0117 00:50:00.708836 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:00.714076 containerd[1452]: time="2026-01-17T00:50:00.714020181Z" level=info msg="CreateContainer within sandbox \"4b36c86277a9d2b4a43aa3844f3e1ddd4a839fdef2d779c611072738de3a5b26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b212b1f7ae7283464e18313f1e2740a56067ed4e11ac0d5a94939d4e6e9ee200\"" Jan 17 00:50:00.715447 containerd[1452]: time="2026-01-17T00:50:00.715371654Z" level=info msg="StartContainer for \"b212b1f7ae7283464e18313f1e2740a56067ed4e11ac0d5a94939d4e6e9ee200\"" Jan 17 00:50:00.716289 containerd[1452]: time="2026-01-17T00:50:00.716186495Z" level=info msg="CreateContainer within sandbox \"a54f26ecf30c0f79f00c0198c0c13793a789046d9101c5799ae0300d8fb641c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 
00:50:00.748661 containerd[1452]: time="2026-01-17T00:50:00.748553476Z" level=info msg="CreateContainer within sandbox \"a54f26ecf30c0f79f00c0198c0c13793a789046d9101c5799ae0300d8fb641c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2dc5ac67aacb9fcb9c1d1fff5d9fa9f4a8e19b690d95b8f1f2a8feec6016deea\"" Jan 17 00:50:00.749808 containerd[1452]: time="2026-01-17T00:50:00.749765209Z" level=info msg="StartContainer for \"2dc5ac67aacb9fcb9c1d1fff5d9fa9f4a8e19b690d95b8f1f2a8feec6016deea\"" Jan 17 00:50:00.758261 systemd[1]: Started cri-containerd-9387fd794ab97c4bdd1822e22076707d6a1b0a333a14ad26a33f1a8dc298d892.scope - libcontainer container 9387fd794ab97c4bdd1822e22076707d6a1b0a333a14ad26a33f1a8dc298d892. Jan 17 00:50:00.769061 systemd[1]: Started cri-containerd-b212b1f7ae7283464e18313f1e2740a56067ed4e11ac0d5a94939d4e6e9ee200.scope - libcontainer container b212b1f7ae7283464e18313f1e2740a56067ed4e11ac0d5a94939d4e6e9ee200. Jan 17 00:50:00.778565 kubelet[2120]: E0117 00:50:00.778441 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:50:00.801074 systemd[1]: Started cri-containerd-2dc5ac67aacb9fcb9c1d1fff5d9fa9f4a8e19b690d95b8f1f2a8feec6016deea.scope - libcontainer container 2dc5ac67aacb9fcb9c1d1fff5d9fa9f4a8e19b690d95b8f1f2a8feec6016deea. Jan 17 00:50:00.846939 containerd[1452]: time="2026-01-17T00:50:00.845654059Z" level=info msg="StartContainer for \"9387fd794ab97c4bdd1822e22076707d6a1b0a333a14ad26a33f1a8dc298d892\" returns successfully" Jan 17 00:50:00.860047 containerd[1452]: time="2026-01-17T00:50:00.859562359Z" level=info msg="StartContainer for \"b212b1f7ae7283464e18313f1e2740a56067ed4e11ac0d5a94939d4e6e9ee200\" returns successfully" Jan 17 00:50:00.868893 kubelet[2120]: E0117 00:50:00.868572 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="1.6s" Jan 17 00:50:00.875942 kubelet[2120]: E0117 00:50:00.875788 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:50:00.895354 containerd[1452]: time="2026-01-17T00:50:00.893776338Z" level=info msg="StartContainer for \"2dc5ac67aacb9fcb9c1d1fff5d9fa9f4a8e19b690d95b8f1f2a8feec6016deea\" returns successfully" Jan 17 00:50:01.057437 kubelet[2120]: I0117 00:50:01.056838 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:50:01.507490 kubelet[2120]: E0117 00:50:01.507249 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:01.507490 kubelet[2120]: E0117 00:50:01.507389 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:01.512005 kubelet[2120]: E0117 
00:50:01.511982 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:01.512688 kubelet[2120]: E0117 00:50:01.512409 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:01.518280 kubelet[2120]: E0117 00:50:01.518244 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:01.519433 kubelet[2120]: E0117 00:50:01.519233 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:02.491379 kubelet[2120]: E0117 00:50:02.491323 2120 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:50:02.506434 kubelet[2120]: I0117 00:50:02.506355 2120 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:50:02.506434 kubelet[2120]: E0117 00:50:02.506410 2120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 00:50:02.521359 kubelet[2120]: E0117 00:50:02.521086 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:02.521359 kubelet[2120]: E0117 00:50:02.521300 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:02.524021 kubelet[2120]: E0117 00:50:02.523551 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:02.524021 kubelet[2120]: E0117 00:50:02.523670 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:02.536365 kubelet[2120]: E0117 00:50:02.536235 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:50:02.536477 kubelet[2120]: E0117 00:50:02.536437 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:02.542663 kubelet[2120]: E0117 00:50:02.542453 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:02.643646 kubelet[2120]: E0117 00:50:02.643505 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:02.744863 kubelet[2120]: E0117 00:50:02.744635 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:02.845272 kubelet[2120]: E0117 00:50:02.845200 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:02.946375 kubelet[2120]: E0117 00:50:02.946235 2120 kubelet_node_status.go:404] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:03.046997 kubelet[2120]: E0117 00:50:03.046673 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:03.146979 kubelet[2120]: E0117 00:50:03.146915 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:50:03.258623 kubelet[2120]: I0117 00:50:03.258464 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:03.265593 kubelet[2120]: E0117 00:50:03.265447 2120 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:03.265593 kubelet[2120]: I0117 00:50:03.265476 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:03.267019 kubelet[2120]: E0117 00:50:03.266958 2120 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:03.267019 kubelet[2120]: I0117 00:50:03.267006 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:03.270174 kubelet[2120]: E0117 00:50:03.270034 2120 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:03.446879 kubelet[2120]: I0117 00:50:03.446808 2120 apiserver.go:52] "Watching apiserver" Jan 17 00:50:03.458784 kubelet[2120]: I0117 00:50:03.458631 2120 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:50:03.728540 kubelet[2120]: I0117 00:50:03.728061 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:03.735335 kubelet[2120]: E0117 00:50:03.735200 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:04.524756 kubelet[2120]: E0117 00:50:04.524644 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:04.700292 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... Jan 17 00:50:04.700336 systemd[1]: Reloading... Jan 17 00:50:04.786769 zram_generator::config[2451]: No configuration found. Jan 17 00:50:04.928063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:50:05.017647 systemd[1]: Reloading finished in 316 ms. Jan 17 00:50:05.081478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:50:05.102320 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:50:05.102648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:50:05.102806 systemd[1]: kubelet.service: Consumed 2.014s CPU time, 126.5M memory peak, 0B memory swap peak. Jan 17 00:50:05.116065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:50:05.287312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:50:05.293008 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:50:05.357927 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:50:05.357927 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:50:05.358445 kubelet[2496]: I0117 00:50:05.357980 2496 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:50:05.369637 kubelet[2496]: I0117 00:50:05.369524 2496 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:50:05.369637 kubelet[2496]: I0117 00:50:05.369578 2496 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:50:05.369637 kubelet[2496]: I0117 00:50:05.369613 2496 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:50:05.369637 kubelet[2496]: I0117 00:50:05.369627 2496 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:50:05.369895 kubelet[2496]: I0117 00:50:05.369883 2496 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:50:05.371462 kubelet[2496]: I0117 00:50:05.371398 2496 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:50:05.374180 kubelet[2496]: I0117 00:50:05.374049 2496 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:50:05.379918 kubelet[2496]: E0117 00:50:05.379885 2496 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:50:05.380072 kubelet[2496]: I0117 00:50:05.380030 2496 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:50:05.388576 kubelet[2496]: I0117 00:50:05.388525 2496 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:50:05.388984 kubelet[2496]: I0117 00:50:05.388896 2496 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:50:05.389086 kubelet[2496]: I0117 00:50:05.388950 2496 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:50:05.389086 kubelet[2496]: I0117 00:50:05.389064 2496 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:50:05.389086 kubelet[2496]: I0117 00:50:05.389073 2496 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:50:05.389318 kubelet[2496]: I0117 00:50:05.389138 2496 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:50:05.390429 kubelet[2496]: I0117 00:50:05.390345 2496 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:50:05.390949 kubelet[2496]: I0117 00:50:05.390883 2496 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:50:05.390949 kubelet[2496]: I0117 00:50:05.390924 2496 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:50:05.390949 kubelet[2496]: I0117 00:50:05.390951 2496 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:50:05.391214 kubelet[2496]: I0117 00:50:05.390978 2496 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:50:05.394300 kubelet[2496]: I0117 00:50:05.394023 2496 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:50:05.397543 kubelet[2496]: I0117 00:50:05.395000 2496 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:50:05.397543 kubelet[2496]: I0117 00:50:05.395168 2496 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:50:05.408051 
kubelet[2496]: I0117 00:50:05.407955 2496 server.go:1262] "Started kubelet" Jan 17 00:50:05.410843 kubelet[2496]: I0117 00:50:05.409875 2496 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:50:05.410843 kubelet[2496]: I0117 00:50:05.409991 2496 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:50:05.410843 kubelet[2496]: I0117 00:50:05.410306 2496 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:50:05.410843 kubelet[2496]: I0117 00:50:05.410379 2496 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:50:05.414090 kubelet[2496]: I0117 00:50:05.414022 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:50:05.414368 kubelet[2496]: I0117 00:50:05.414337 2496 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:50:05.417155 kubelet[2496]: I0117 00:50:05.415326 2496 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:50:05.417155 kubelet[2496]: I0117 00:50:05.415417 2496 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:50:05.417155 kubelet[2496]: I0117 00:50:05.415543 2496 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:50:05.417155 kubelet[2496]: I0117 00:50:05.415935 2496 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:50:05.420355 kubelet[2496]: I0117 00:50:05.420291 2496 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:50:05.420466 kubelet[2496]: I0117 00:50:05.420424 2496 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:50:05.423672 kubelet[2496]: E0117 00:50:05.423623 2496 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:50:05.424269 kubelet[2496]: I0117 00:50:05.424228 2496 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:50:05.439607 kubelet[2496]: I0117 00:50:05.439549 2496 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:50:05.442352 kubelet[2496]: I0117 00:50:05.441864 2496 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:50:05.442352 kubelet[2496]: I0117 00:50:05.441884 2496 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:50:05.442352 kubelet[2496]: I0117 00:50:05.441909 2496 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:50:05.442352 kubelet[2496]: E0117 00:50:05.441956 2496 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:50:05.484363 kubelet[2496]: I0117 00:50:05.484282 2496 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:50:05.484363 kubelet[2496]: I0117 00:50:05.484336 2496 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:50:05.484363 kubelet[2496]: I0117 00:50:05.484359 2496 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:50:05.484584 kubelet[2496]: I0117 00:50:05.484511 2496 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:50:05.484584 kubelet[2496]: I0117 00:50:05.484560 2496 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:50:05.484584 kubelet[2496]: I0117 00:50:05.484582 2496 policy_none.go:49] "None policy: Start" Jan 17 00:50:05.484661 kubelet[2496]: I0117 00:50:05.484594 2496 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:50:05.484661 kubelet[2496]: I0117 00:50:05.484607 2496 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:50:05.484822 kubelet[2496]: I0117 00:50:05.484794 2496 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:50:05.484822 kubelet[2496]: I0117 00:50:05.484811 2496 policy_none.go:47] "Start" Jan 17 00:50:05.492062 kubelet[2496]: E0117 00:50:05.491985 2496 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:50:05.492352 kubelet[2496]: I0117 00:50:05.492290 2496 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:50:05.492396 kubelet[2496]: I0117 00:50:05.492334 2496 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:50:05.493683 kubelet[2496]: I0117 00:50:05.493663 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:50:05.494880 kubelet[2496]: E0117 00:50:05.494687 2496 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:50:05.544622 kubelet[2496]: I0117 00:50:05.543611 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.544622 kubelet[2496]: I0117 00:50:05.543646 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:05.544622 kubelet[2496]: I0117 00:50:05.544376 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:05.558991 kubelet[2496]: E0117 00:50:05.558869 2496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:05.604595 kubelet[2496]: I0117 00:50:05.604426 2496 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:50:05.617447 kubelet[2496]: I0117 00:50:05.617262 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:05.617447 kubelet[2496]: I0117 00:50:05.617322 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:05.617447 kubelet[2496]: I0117 00:50:05.617345 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.617447 kubelet[2496]: I0117 00:50:05.617364 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:05.617447 kubelet[2496]: I0117 00:50:05.617384 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb9c55cb7d6f782ead6bddb67fb525d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb9c55cb7d6f782ead6bddb67fb525d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:05.617634 kubelet[2496]: I0117 00:50:05.617402 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.617634 kubelet[2496]: I0117 00:50:05.617420 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.617634 kubelet[2496]: I0117 00:50:05.617437 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.617634 kubelet[2496]: I0117 00:50:05.617457 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:05.617922 kubelet[2496]: I0117 00:50:05.617847 2496 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:50:05.617922 kubelet[2496]: I0117 00:50:05.617915 2496 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:50:05.862490 kubelet[2496]: E0117 00:50:05.860209 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:05.862490 kubelet[2496]: E0117 00:50:05.861573 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:05.862490 kubelet[2496]: E0117 00:50:05.861864 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:06.394465 kubelet[2496]: I0117 00:50:06.394060 2496 apiserver.go:52] "Watching apiserver" Jan 17 00:50:06.415574 kubelet[2496]: I0117 00:50:06.415496 2496 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:50:06.462236 kubelet[2496]: I0117 00:50:06.461979 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:06.464633 kubelet[2496]: I0117 00:50:06.464414 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:06.464633 kubelet[2496]: I0117 00:50:06.464511 2496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:06.476015 kubelet[2496]: E0117 00:50:06.475981 2496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:50:06.476311 kubelet[2496]: E0117 00:50:06.476217 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:06.478661 kubelet[2496]: E0117 00:50:06.478605 2496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:50:06.478880 kubelet[2496]: E0117 00:50:06.478827 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:06.480754 kubelet[2496]: E0117 00:50:06.479194 2496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:50:06.480754 kubelet[2496]: E0117 00:50:06.479685 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:06.511685 kubelet[2496]: I0117 00:50:06.511440 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.511426681 podStartE2EDuration="1.511426681s" podCreationTimestamp="2026-01-17 00:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:06.49938233 +0000 UTC m=+1.201195205" watchObservedRunningTime="2026-01-17 00:50:06.511426681 +0000 UTC m=+1.213239577" Jan 17 00:50:06.523486 kubelet[2496]: I0117 00:50:06.523200 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.52318356 podStartE2EDuration="3.52318356s" podCreationTimestamp="2026-01-17 00:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:06.512165798 +0000 UTC m=+1.213978724" watchObservedRunningTime="2026-01-17 00:50:06.52318356 +0000 UTC m=+1.224996436" Jan 17 00:50:06.534964 kubelet[2496]: I0117 00:50:06.534804 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.534791316 podStartE2EDuration="1.534791316s" podCreationTimestamp="2026-01-17 00:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:06.523346069 +0000 UTC m=+1.225158965" watchObservedRunningTime="2026-01-17 00:50:06.534791316 +0000 UTC m=+1.236604193" Jan 17 00:50:07.466143 kubelet[2496]: E0117 00:50:07.465941 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:07.466143 kubelet[2496]: E0117 00:50:07.465941 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:07.466143 kubelet[2496]: E0117 00:50:07.466123 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:08.468783 kubelet[2496]: E0117 00:50:08.468586 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:08.470152 kubelet[2496]: E0117 00:50:08.470095 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:08.679241 kubelet[2496]: E0117 00:50:08.679154 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:11.539681 kubelet[2496]: E0117 00:50:11.539564 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:11.765376 kubelet[2496]: I0117 00:50:11.765297 2496 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:50:11.766011 containerd[1452]: time="2026-01-17T00:50:11.765882447Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:50:11.767323 kubelet[2496]: I0117 00:50:11.766263 2496 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:50:12.015204 systemd[1]: Created slice kubepods-besteffort-podffe1a08e_4ec3_4a81_8d73_35d206d5032f.slice - libcontainer container kubepods-besteffort-podffe1a08e_4ec3_4a81_8d73_35d206d5032f.slice. Jan 17 00:50:12.161268 kubelet[2496]: I0117 00:50:12.161136 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-lib-modules\") pod \"kube-proxy-hqwtk\" (UID: \"ffe1a08e-4ec3-4a81-8d73-35d206d5032f\") " pod="kube-system/kube-proxy-hqwtk" Jan 17 00:50:12.161268 kubelet[2496]: I0117 00:50:12.161212 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wmc\" (UniqueName: \"kubernetes.io/projected/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-kube-api-access-f5wmc\") pod \"kube-proxy-hqwtk\" (UID: \"ffe1a08e-4ec3-4a81-8d73-35d206d5032f\") " pod="kube-system/kube-proxy-hqwtk" Jan 17 00:50:12.161465 kubelet[2496]: I0117 00:50:12.161341 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-kube-proxy\") pod \"kube-proxy-hqwtk\" (UID: \"ffe1a08e-4ec3-4a81-8d73-35d206d5032f\") " pod="kube-system/kube-proxy-hqwtk" Jan 17 00:50:12.161465 kubelet[2496]: I0117 00:50:12.161399 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-xtables-lock\") pod \"kube-proxy-hqwtk\" (UID: \"ffe1a08e-4ec3-4a81-8d73-35d206d5032f\") " pod="kube-system/kube-proxy-hqwtk" Jan 17 00:50:12.271863 kubelet[2496]: E0117 00:50:12.271551 2496 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:50:12.271863 kubelet[2496]: E0117 00:50:12.271630 2496 projected.go:196] Error preparing data for projected volume kube-api-access-f5wmc for pod kube-system/kube-proxy-hqwtk: configmap "kube-root-ca.crt" not found Jan 17 00:50:12.271863 kubelet[2496]: E0117 00:50:12.271847 2496 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-kube-api-access-f5wmc podName:ffe1a08e-4ec3-4a81-8d73-35d206d5032f nodeName:}" failed. No retries permitted until 2026-01-17 00:50:12.771817056 +0000 UTC m=+7.473629932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f5wmc" (UniqueName: "kubernetes.io/projected/ffe1a08e-4ec3-4a81-8d73-35d206d5032f-kube-api-access-f5wmc") pod "kube-proxy-hqwtk" (UID: "ffe1a08e-4ec3-4a81-8d73-35d206d5032f") : configmap "kube-root-ca.crt" not found Jan 17 00:50:12.477969 kubelet[2496]: E0117 00:50:12.477821 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:12.932137 kubelet[2496]: E0117 00:50:12.931396 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:12.932540 containerd[1452]: time="2026-01-17T00:50:12.932382507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqwtk,Uid:ffe1a08e-4ec3-4a81-8d73-35d206d5032f,Namespace:kube-system,Attempt:0,}" Jan 17 00:50:12.934523 systemd[1]: Created slice kubepods-besteffort-pod3410893e_7dea_4e69_a316_ae0a20425806.slice - libcontainer container kubepods-besteffort-pod3410893e_7dea_4e69_a316_ae0a20425806.slice. Jan 17 00:50:12.964492 containerd[1452]: time="2026-01-17T00:50:12.964232671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:12.964492 containerd[1452]: time="2026-01-17T00:50:12.964340861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:12.964492 containerd[1452]: time="2026-01-17T00:50:12.964364113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:12.964679 containerd[1452]: time="2026-01-17T00:50:12.964532434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:13.003989 systemd[1]: Started cri-containerd-4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b.scope - libcontainer container 4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b. 
Jan 17 00:50:13.034498 containerd[1452]: time="2026-01-17T00:50:13.034359561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqwtk,Uid:ffe1a08e-4ec3-4a81-8d73-35d206d5032f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b\"" Jan 17 00:50:13.035236 kubelet[2496]: E0117 00:50:13.035164 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:13.041808 containerd[1452]: time="2026-01-17T00:50:13.041648563Z" level=info msg="CreateContainer within sandbox \"4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:50:13.062448 containerd[1452]: time="2026-01-17T00:50:13.062345593Z" level=info msg="CreateContainer within sandbox \"4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f56162bbe2cb981c027b31d6d15a66d76ad8d89df661db4cdf9f2ec9728f3042\"" Jan 17 00:50:13.063461 containerd[1452]: time="2026-01-17T00:50:13.063298162Z" level=info msg="StartContainer for \"f56162bbe2cb981c027b31d6d15a66d76ad8d89df661db4cdf9f2ec9728f3042\"" Jan 17 00:50:13.067303 kubelet[2496]: I0117 00:50:13.067228 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfsnz\" (UniqueName: \"kubernetes.io/projected/3410893e-7dea-4e69-a316-ae0a20425806-kube-api-access-rfsnz\") pod \"tigera-operator-65cdcdfd6d-947wd\" (UID: \"3410893e-7dea-4e69-a316-ae0a20425806\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-947wd" Jan 17 00:50:13.067303 kubelet[2496]: I0117 00:50:13.067295 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3410893e-7dea-4e69-a316-ae0a20425806-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-947wd\" (UID: \"3410893e-7dea-4e69-a316-ae0a20425806\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-947wd" Jan 17 00:50:13.106992 systemd[1]: Started cri-containerd-f56162bbe2cb981c027b31d6d15a66d76ad8d89df661db4cdf9f2ec9728f3042.scope - libcontainer container f56162bbe2cb981c027b31d6d15a66d76ad8d89df661db4cdf9f2ec9728f3042. Jan 17 00:50:13.151531 containerd[1452]: time="2026-01-17T00:50:13.151437299Z" level=info msg="StartContainer for \"f56162bbe2cb981c027b31d6d15a66d76ad8d89df661db4cdf9f2ec9728f3042\" returns successfully" Jan 17 00:50:13.244167 containerd[1452]: time="2026-01-17T00:50:13.243350658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-947wd,Uid:3410893e-7dea-4e69-a316-ae0a20425806,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:50:13.281182 containerd[1452]: time="2026-01-17T00:50:13.280656724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:13.281182 containerd[1452]: time="2026-01-17T00:50:13.280868565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:13.281182 containerd[1452]: time="2026-01-17T00:50:13.280895435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:13.281182 containerd[1452]: time="2026-01-17T00:50:13.281087149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:13.309303 systemd[1]: Started cri-containerd-ec8892fcd6134d6c9fffbe538b7490069ace8512e9a94b105061669ca6007eb2.scope - libcontainer container ec8892fcd6134d6c9fffbe538b7490069ace8512e9a94b105061669ca6007eb2. Jan 17 00:50:13.363896 containerd[1452]: time="2026-01-17T00:50:13.363755910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-947wd,Uid:3410893e-7dea-4e69-a316-ae0a20425806,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ec8892fcd6134d6c9fffbe538b7490069ace8512e9a94b105061669ca6007eb2\"" Jan 17 00:50:13.368104 containerd[1452]: time="2026-01-17T00:50:13.368021842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:50:13.483372 kubelet[2496]: E0117 00:50:13.483325 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:13.483526 kubelet[2496]: E0117 00:50:13.483404 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:13.879869 systemd[1]: run-containerd-runc-k8s.io-4eb6a8a83ff0deb754ae7a4a61a468b28d838d35c68036f60fd6200dcb5b531b-runc.meGx6J.mount: Deactivated successfully. Jan 17 00:50:15.166356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561447635.mount: Deactivated successfully. Jan 17 00:50:17.184379 kubelet[2496]: E0117 00:50:17.183019 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:17.197999 kubelet[2496]: I0117 00:50:17.197816 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hqwtk" podStartSLOduration=6.197798408 podStartE2EDuration="6.197798408s" podCreationTimestamp="2026-01-17 00:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:13.494590999 +0000 UTC m=+8.196403895" watchObservedRunningTime="2026-01-17 00:50:17.197798408 +0000 UTC m=+11.899611284" Jan 17 00:50:18.685244 containerd[1452]: time="2026-01-17T00:50:18.685044370Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:18.686513 containerd[1452]: time="2026-01-17T00:50:18.686415109Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:50:18.689037 containerd[1452]: time="2026-01-17T00:50:18.688579408Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:18.692158 containerd[1452]: time="2026-01-17T00:50:18.691957435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:18.693362 containerd[1452]: time="2026-01-17T00:50:18.693208635Z" level=info 
msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.325118387s" Jan 17 00:50:18.693362 containerd[1452]: time="2026-01-17T00:50:18.693254269Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:50:18.707093 kubelet[2496]: E0117 00:50:18.707041 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:18.710457 containerd[1452]: time="2026-01-17T00:50:18.710284216Z" level=info msg="CreateContainer within sandbox \"ec8892fcd6134d6c9fffbe538b7490069ace8512e9a94b105061669ca6007eb2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:50:18.735876 containerd[1452]: time="2026-01-17T00:50:18.735636783Z" level=info msg="CreateContainer within sandbox \"ec8892fcd6134d6c9fffbe538b7490069ace8512e9a94b105061669ca6007eb2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d63d31e9b1d7f2bee975e45fd432b5d6334db4fee745229efa0b2db9c141b9c4\"" Jan 17 00:50:18.738634 containerd[1452]: time="2026-01-17T00:50:18.738567009Z" level=info msg="StartContainer for \"d63d31e9b1d7f2bee975e45fd432b5d6334db4fee745229efa0b2db9c141b9c4\"" Jan 17 00:50:18.785033 systemd[1]: Started cri-containerd-d63d31e9b1d7f2bee975e45fd432b5d6334db4fee745229efa0b2db9c141b9c4.scope - libcontainer container d63d31e9b1d7f2bee975e45fd432b5d6334db4fee745229efa0b2db9c141b9c4. Jan 17 00:50:18.878424 containerd[1452]: time="2026-01-17T00:50:18.878375342Z" level=info msg="StartContainer for \"d63d31e9b1d7f2bee975e45fd432b5d6334db4fee745229efa0b2db9c141b9c4\" returns successfully" Jan 17 00:50:19.352976 update_engine[1441]: I20260117 00:50:19.352798 1441 update_attempter.cc:509] Updating boot flags... Jan 17 00:50:19.392793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2862) Jan 17 00:50:19.456058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2865) Jan 17 00:50:19.516969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2865) Jan 17 00:50:19.525435 kubelet[2496]: I0117 00:50:19.525111 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-947wd" podStartSLOduration=2.18682649 podStartE2EDuration="7.525093876s" podCreationTimestamp="2026-01-17 00:50:12 +0000 UTC" firstStartedPulling="2026-01-17 00:50:13.365595291 +0000 UTC m=+8.067408166" lastFinishedPulling="2026-01-17 00:50:18.703862666 +0000 UTC m=+13.405675552" observedRunningTime="2026-01-17 00:50:19.525051226 +0000 UTC m=+14.226864102" watchObservedRunningTime="2026-01-17 00:50:19.525093876 +0000 UTC m=+14.226906751" Jan 17 00:50:24.704683 sudo[1630]: pam_unix(sudo:session): session closed for user root Jan 17 00:50:24.711057 sshd[1627]: pam_unix(sshd:session): session closed for user core Jan 17 00:50:24.722171 systemd[1]: sshd@6-10.0.0.159:22-10.0.0.1:54426.service: Deactivated successfully. Jan 17 00:50:24.726637 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 17 00:50:24.727346 systemd[1]: session-7.scope: Consumed 7.432s CPU time, 161.0M memory peak, 0B memory swap peak. Jan 17 00:50:24.733517 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:50:24.737951 systemd-logind[1432]: Removed session 7. Jan 17 00:50:29.912880 systemd[1]: Created slice kubepods-besteffort-podbe64011a_2a64_471a_b2d0_b3e4f236edeb.slice - libcontainer container kubepods-besteffort-podbe64011a_2a64_471a_b2d0_b3e4f236edeb.slice. Jan 17 00:50:30.005137 kubelet[2496]: I0117 00:50:30.005066 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be64011a-2a64-471a-b2d0-b3e4f236edeb-tigera-ca-bundle\") pod \"calico-typha-65df64d45d-7qpsp\" (UID: \"be64011a-2a64-471a-b2d0-b3e4f236edeb\") " pod="calico-system/calico-typha-65df64d45d-7qpsp" Jan 17 00:50:30.005637 kubelet[2496]: I0117 00:50:30.005146 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/be64011a-2a64-471a-b2d0-b3e4f236edeb-typha-certs\") pod \"calico-typha-65df64d45d-7qpsp\" (UID: \"be64011a-2a64-471a-b2d0-b3e4f236edeb\") " pod="calico-system/calico-typha-65df64d45d-7qpsp" Jan 17 00:50:30.005637 kubelet[2496]: I0117 00:50:30.005173 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7brk\" (UniqueName: \"kubernetes.io/projected/be64011a-2a64-471a-b2d0-b3e4f236edeb-kube-api-access-c7brk\") pod \"calico-typha-65df64d45d-7qpsp\" (UID: \"be64011a-2a64-471a-b2d0-b3e4f236edeb\") " pod="calico-system/calico-typha-65df64d45d-7qpsp" Jan 17 00:50:30.216355 systemd[1]: Created slice kubepods-besteffort-pod5a34585f_e3b9_4cdd_bcc7_2f973e696495.slice - libcontainer container kubepods-besteffort-pod5a34585f_e3b9_4cdd_bcc7_2f973e696495.slice. Jan 17 00:50:30.223373 kubelet[2496]: E0117 00:50:30.223135 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:30.224954 containerd[1452]: time="2026-01-17T00:50:30.224902541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65df64d45d-7qpsp,Uid:be64011a-2a64-471a-b2d0-b3e4f236edeb,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:30.275275 containerd[1452]: time="2026-01-17T00:50:30.272968676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:30.275275 containerd[1452]: time="2026-01-17T00:50:30.273034849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:30.275275 containerd[1452]: time="2026-01-17T00:50:30.273118705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:30.275275 containerd[1452]: time="2026-01-17T00:50:30.273333574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:30.361566 systemd[1]: Started cri-containerd-6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672.scope - libcontainer container 6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672. 
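Annotation: the "Created slice kubepods-besteffort-pod<uid>.slice" entries show the systemd cgroup driver at work: each pod gets a transient slice named from its QoS class and its UID, with the UID's dashes turned into underscores because "-" acts as a hierarchy separator in slice unit names. A small sketch of that naming, assuming the usual kubepods hierarchy:

// podslice.go — a sketch of how the pod slice names in this log are formed
// under the systemd cgroup driver; dashes in the pod UID become underscores.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		// guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "be64011a-2a64-471a-b2d0-b3e4f236edeb"))
	// kubepods-besteffort-podbe64011a_2a64_471a_b2d0_b3e4f236edeb.slice
}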
Jan 17 00:50:30.396604 kubelet[2496]: E0117 00:50:30.396487 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:30.407564 kubelet[2496]: I0117 00:50:30.407216 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a34585f-e3b9-4cdd-bcc7-2f973e696495-tigera-ca-bundle\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407564 kubelet[2496]: I0117 00:50:30.407259 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-cni-bin-dir\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407564 kubelet[2496]: I0117 00:50:30.407283 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-var-run-calico\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407564 kubelet[2496]: I0117 00:50:30.407306 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-var-lib-calico\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407564 kubelet[2496]: I0117 00:50:30.407331 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-lib-modules\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407932 kubelet[2496]: I0117 00:50:30.407352 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a34585f-e3b9-4cdd-bcc7-2f973e696495-node-certs\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407932 kubelet[2496]: I0117 00:50:30.407372 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-xtables-lock\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407932 kubelet[2496]: I0117 00:50:30.407445 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2thh\" (UniqueName: \"kubernetes.io/projected/5a34585f-e3b9-4cdd-bcc7-2f973e696495-kube-api-access-x2thh\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407932 kubelet[2496]: I0117 00:50:30.407526 2496 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-cni-log-dir\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.407932 kubelet[2496]: I0117 00:50:30.407587 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-cni-net-dir\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.408239 kubelet[2496]: I0117 00:50:30.407619 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-flexvol-driver-host\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.408239 kubelet[2496]: I0117 00:50:30.407636 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a34585f-e3b9-4cdd-bcc7-2f973e696495-policysync\") pod \"calico-node-jk6xz\" (UID: \"5a34585f-e3b9-4cdd-bcc7-2f973e696495\") " pod="calico-system/calico-node-jk6xz" Jan 17 00:50:30.464536 containerd[1452]: time="2026-01-17T00:50:30.464400955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65df64d45d-7qpsp,Uid:be64011a-2a64-471a-b2d0-b3e4f236edeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672\"" Jan 17 00:50:30.468170 kubelet[2496]: E0117 00:50:30.466958 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:30.469844 containerd[1452]: time="2026-01-17T00:50:30.469523001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:50:30.508635 kubelet[2496]: I0117 00:50:30.508567 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4022344e-59ba-4aec-9ee8-9c1779407c17-socket-dir\") pod \"csi-node-driver-8pldn\" (UID: \"4022344e-59ba-4aec-9ee8-9c1779407c17\") " pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:30.508635 kubelet[2496]: I0117 00:50:30.508624 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4022344e-59ba-4aec-9ee8-9c1779407c17-varrun\") pod \"csi-node-driver-8pldn\" (UID: \"4022344e-59ba-4aec-9ee8-9c1779407c17\") " pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:30.508990 kubelet[2496]: I0117 00:50:30.508673 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4022344e-59ba-4aec-9ee8-9c1779407c17-kubelet-dir\") pod \"csi-node-driver-8pldn\" (UID: \"4022344e-59ba-4aec-9ee8-9c1779407c17\") " pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:30.508990 kubelet[2496]: I0117 00:50:30.508801 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t97f\" (UniqueName: 
\"kubernetes.io/projected/4022344e-59ba-4aec-9ee8-9c1779407c17-kube-api-access-7t97f\") pod \"csi-node-driver-8pldn\" (UID: \"4022344e-59ba-4aec-9ee8-9c1779407c17\") " pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:30.508990 kubelet[2496]: I0117 00:50:30.508861 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4022344e-59ba-4aec-9ee8-9c1779407c17-registration-dir\") pod \"csi-node-driver-8pldn\" (UID: \"4022344e-59ba-4aec-9ee8-9c1779407c17\") " pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:30.512595 kubelet[2496]: E0117 00:50:30.512576 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.512817 kubelet[2496]: W0117 00:50:30.512664 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.512817 kubelet[2496]: E0117 00:50:30.512787 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.515345 kubelet[2496]: E0117 00:50:30.515302 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.515345 kubelet[2496]: W0117 00:50:30.515339 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.515445 kubelet[2496]: E0117 00:50:30.515356 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.519082 kubelet[2496]: E0117 00:50:30.518967 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.519082 kubelet[2496]: W0117 00:50:30.519013 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.519082 kubelet[2496]: E0117 00:50:30.519032 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.523995 kubelet[2496]: E0117 00:50:30.523917 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:30.524669 containerd[1452]: time="2026-01-17T00:50:30.524630092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jk6xz,Uid:5a34585f-e3b9-4cdd-bcc7-2f973e696495,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:30.573995 containerd[1452]: time="2026-01-17T00:50:30.573311866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:30.573995 containerd[1452]: time="2026-01-17T00:50:30.573652940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:30.573995 containerd[1452]: time="2026-01-17T00:50:30.573674530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:30.575455 containerd[1452]: time="2026-01-17T00:50:30.574637733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:30.609957 kubelet[2496]: E0117 00:50:30.609909 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.609957 kubelet[2496]: W0117 00:50:30.609935 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.610079 kubelet[2496]: E0117 00:50:30.609960 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.610656 kubelet[2496]: E0117 00:50:30.610589 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.610656 kubelet[2496]: W0117 00:50:30.610633 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.610656 kubelet[2496]: E0117 00:50:30.610645 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.611328 kubelet[2496]: E0117 00:50:30.611243 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.611328 kubelet[2496]: W0117 00:50:30.611283 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.611328 kubelet[2496]: E0117 00:50:30.611293 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.611658 kubelet[2496]: E0117 00:50:30.611576 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.611658 kubelet[2496]: W0117 00:50:30.611614 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.611658 kubelet[2496]: E0117 00:50:30.611623 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.612165 systemd[1]: Started cri-containerd-e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da.scope - libcontainer container e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da. 
Jan 17 00:50:30.612687 kubelet[2496]: E0117 00:50:30.612625 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.612687 kubelet[2496]: W0117 00:50:30.612669 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.612687 kubelet[2496]: E0117 00:50:30.612679 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.613359 kubelet[2496]: E0117 00:50:30.613270 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.613359 kubelet[2496]: W0117 00:50:30.613284 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.613359 kubelet[2496]: E0117 00:50:30.613294 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.614309 kubelet[2496]: E0117 00:50:30.613931 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.614309 kubelet[2496]: W0117 00:50:30.613941 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.614309 kubelet[2496]: E0117 00:50:30.613951 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.614411 kubelet[2496]: E0117 00:50:30.614375 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.614411 kubelet[2496]: W0117 00:50:30.614385 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.614411 kubelet[2496]: E0117 00:50:30.614393 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.615048 kubelet[2496]: E0117 00:50:30.614942 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.615048 kubelet[2496]: W0117 00:50:30.614988 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.615048 kubelet[2496]: E0117 00:50:30.614998 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:50:30.615341 kubelet[2496]: E0117 00:50:30.615314 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.615341 kubelet[2496]: W0117 00:50:30.615327 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.615341 kubelet[2496]: E0117 00:50:30.615335 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.616114 kubelet[2496]: E0117 00:50:30.615811 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.616114 kubelet[2496]: W0117 00:50:30.615824 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.616114 kubelet[2496]: E0117 00:50:30.615833 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.616228 kubelet[2496]: E0117 00:50:30.616192 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.616228 kubelet[2496]: W0117 00:50:30.616205 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.616228 kubelet[2496]: E0117 00:50:30.616217 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.616928 kubelet[2496]: E0117 00:50:30.616656 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.616928 kubelet[2496]: W0117 00:50:30.616786 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.616928 kubelet[2496]: E0117 00:50:30.616796 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.617435 kubelet[2496]: E0117 00:50:30.617334 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.617435 kubelet[2496]: W0117 00:50:30.617368 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.617435 kubelet[2496]: E0117 00:50:30.617379 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:50:30.618591 kubelet[2496]: E0117 00:50:30.618535 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.618591 kubelet[2496]: W0117 00:50:30.618547 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.618591 kubelet[2496]: E0117 00:50:30.618556 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.619288 kubelet[2496]: E0117 00:50:30.619252 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.619288 kubelet[2496]: W0117 00:50:30.619289 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.619288 kubelet[2496]: E0117 00:50:30.619299 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.619804 kubelet[2496]: E0117 00:50:30.619608 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.619804 kubelet[2496]: W0117 00:50:30.619642 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.619804 kubelet[2496]: E0117 00:50:30.619651 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.620153 kubelet[2496]: E0117 00:50:30.620073 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.620153 kubelet[2496]: W0117 00:50:30.620107 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.620153 kubelet[2496]: E0117 00:50:30.620116 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.620587 kubelet[2496]: E0117 00:50:30.620534 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.620587 kubelet[2496]: W0117 00:50:30.620546 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.620587 kubelet[2496]: E0117 00:50:30.620555 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:50:30.621927 kubelet[2496]: E0117 00:50:30.621355 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.621927 kubelet[2496]: W0117 00:50:30.621365 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.621927 kubelet[2496]: E0117 00:50:30.621375 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.622168 kubelet[2496]: E0117 00:50:30.622033 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.622168 kubelet[2496]: W0117 00:50:30.622044 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.622168 kubelet[2496]: E0117 00:50:30.622053 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.622508 kubelet[2496]: E0117 00:50:30.622496 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.622566 kubelet[2496]: W0117 00:50:30.622556 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.622800 kubelet[2496]: E0117 00:50:30.622610 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.623222 kubelet[2496]: E0117 00:50:30.623210 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.623272 kubelet[2496]: W0117 00:50:30.623261 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.623327 kubelet[2496]: E0117 00:50:30.623316 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.623922 kubelet[2496]: E0117 00:50:30.623910 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.623977 kubelet[2496]: W0117 00:50:30.623967 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.624033 kubelet[2496]: E0117 00:50:30.624022 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:50:30.624474 kubelet[2496]: E0117 00:50:30.624462 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.624851 kubelet[2496]: W0117 00:50:30.624517 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.624851 kubelet[2496]: E0117 00:50:30.624530 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.635271 kubelet[2496]: E0117 00:50:30.635257 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:50:30.635360 kubelet[2496]: W0117 00:50:30.635327 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:50:30.635360 kubelet[2496]: E0117 00:50:30.635341 2496 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:50:30.653024 containerd[1452]: time="2026-01-17T00:50:30.652870752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jk6xz,Uid:5a34585f-e3b9-4cdd-bcc7-2f973e696495,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\"" Jan 17 00:50:30.654428 kubelet[2496]: E0117 00:50:30.654112 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:31.119499 systemd[1]: run-containerd-runc-k8s.io-6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672-runc.BlTYzS.mount: Deactivated successfully. 
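Annotation: the burst of driver-call.go errors above is the kubelet probing its FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) on every volume-plugin refresh. The Calico flexvol-driver init container that installs that binary only starts later (00:50:32 below), so each probe fails with "executable file not found" and the empty output cannot be parsed as JSON. A minimal sketch of the call contract being probed, assuming only the "init" verb; the real driver shipped by calico/pod2daemon handles more:

// flexvol_stub.go — a minimal sketch of the FlexVolume "init" contract the
// kubelet is exercising above: the driver binary must answer on stdout with a
// JSON status document. Not the real Calico driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 || os.Args[1] != "init" {
		// Anything beyond init is out of scope for this sketch.
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
		os.Exit(1)
	}
	out := driverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	}
	if err := json.NewEncoder(os.Stdout).Encode(out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}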
Jan 17 00:50:31.788855 containerd[1452]: time="2026-01-17T00:50:31.788575278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:31.790655 containerd[1452]: time="2026-01-17T00:50:31.790462270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:50:31.792293 containerd[1452]: time="2026-01-17T00:50:31.792051439Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:31.795610 containerd[1452]: time="2026-01-17T00:50:31.795543052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:31.796478 containerd[1452]: time="2026-01-17T00:50:31.796406553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.326836835s" Jan 17 00:50:31.796533 containerd[1452]: time="2026-01-17T00:50:31.796482605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:50:31.797839 containerd[1452]: time="2026-01-17T00:50:31.797789391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:50:31.811610 containerd[1452]: time="2026-01-17T00:50:31.811549746Z" level=info msg="CreateContainer within sandbox \"6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:50:31.830033 containerd[1452]: time="2026-01-17T00:50:31.829933165Z" level=info msg="CreateContainer within sandbox \"6c69bdc3f71d82638f96f807b36da083729bccabe1adb419e430a36f8902f672\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"580189813f152a79f9a9e77cb43d5d17906233f7c67e5e7b20948a144ddac8a6\"" Jan 17 00:50:31.830432 containerd[1452]: time="2026-01-17T00:50:31.830408026Z" level=info msg="StartContainer for \"580189813f152a79f9a9e77cb43d5d17906233f7c67e5e7b20948a144ddac8a6\"" Jan 17 00:50:31.876917 systemd[1]: Started cri-containerd-580189813f152a79f9a9e77cb43d5d17906233f7c67e5e7b20948a144ddac8a6.scope - libcontainer container 580189813f152a79f9a9e77cb43d5d17906233f7c67e5e7b20948a144ddac8a6. 
Jan 17 00:50:31.943809 containerd[1452]: time="2026-01-17T00:50:31.943607630Z" level=info msg="StartContainer for \"580189813f152a79f9a9e77cb43d5d17906233f7c67e5e7b20948a144ddac8a6\" returns successfully" Jan 17 00:50:32.424877 containerd[1452]: time="2026-01-17T00:50:32.424829648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:32.427130 containerd[1452]: time="2026-01-17T00:50:32.426940599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:50:32.428551 containerd[1452]: time="2026-01-17T00:50:32.428515296Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:32.432601 containerd[1452]: time="2026-01-17T00:50:32.432545014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:32.433639 containerd[1452]: time="2026-01-17T00:50:32.433567659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 635.723176ms" Jan 17 00:50:32.433876 containerd[1452]: time="2026-01-17T00:50:32.433646005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:50:32.440799 containerd[1452]: time="2026-01-17T00:50:32.440637807Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:50:32.444956 kubelet[2496]: E0117 00:50:32.442508 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:32.461540 containerd[1452]: time="2026-01-17T00:50:32.461411300Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a\"" Jan 17 00:50:32.462482 containerd[1452]: time="2026-01-17T00:50:32.462428736Z" level=info msg="StartContainer for \"f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a\"" Jan 17 00:50:32.513939 systemd[1]: Started cri-containerd-f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a.scope - libcontainer container f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a. 
Jan 17 00:50:32.555625 kubelet[2496]: E0117 00:50:32.553363 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:32.571306 containerd[1452]: time="2026-01-17T00:50:32.570469890Z" level=info msg="StartContainer for \"f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a\" returns successfully" Jan 17 00:50:32.573319 kubelet[2496]: I0117 00:50:32.573121 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65df64d45d-7qpsp" podStartSLOduration=2.244195051 podStartE2EDuration="3.572835472s" podCreationTimestamp="2026-01-17 00:50:29 +0000 UTC" firstStartedPulling="2026-01-17 00:50:30.46890399 +0000 UTC m=+25.170716865" lastFinishedPulling="2026-01-17 00:50:31.7975444 +0000 UTC m=+26.499357286" observedRunningTime="2026-01-17 00:50:32.570022668 +0000 UTC m=+27.271835544" watchObservedRunningTime="2026-01-17 00:50:32.572835472 +0000 UTC m=+27.274648359" Jan 17 00:50:32.596453 systemd[1]: cri-containerd-f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a.scope: Deactivated successfully. Jan 17 00:50:32.736254 containerd[1452]: time="2026-01-17T00:50:32.736022494Z" level=info msg="shim disconnected" id=f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a namespace=k8s.io Jan 17 00:50:32.736254 containerd[1452]: time="2026-01-17T00:50:32.736079741Z" level=warning msg="cleaning up after shim disconnected" id=f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a namespace=k8s.io Jan 17 00:50:32.736254 containerd[1452]: time="2026-01-17T00:50:32.736090561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:50:32.757409 containerd[1452]: time="2026-01-17T00:50:32.757285844Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:50:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:50:33.114684 systemd[1]: run-containerd-runc-k8s.io-f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a-runc.NNYKlP.mount: Deactivated successfully. Jan 17 00:50:33.114971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a-rootfs.mount: Deactivated successfully. 
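Annotation: the "shim disconnected" / "cleaning up dead shim" sequence and the "runc did not terminate successfully: exit status 255" warning above follow the flexvol-driver container exiting; it is a short-lived init container, so its scope is deactivated, containerd tears down the runc shim, and the cleanup pass finds nothing left to remove. A hypothetical helper sketch (not containerd code) for pulling the container id out of such shim lines so they can be matched to the earlier StartContainer entries:

// shimid.go — a hypothetical helper that extracts the 64-hex container id from
// "shim disconnected" log lines like the ones above.
package main

import (
	"fmt"
	"regexp"
)

var idRe = regexp.MustCompile(`id=([0-9a-f]{64})`)

func main() {
	line := `level=info msg="shim disconnected" id=f7105fb0d183b02bd82b486d7a57b3674cbf427c7a1d84720fe794b786ecbc6a namespace=k8s.io`
	if m := idRe.FindStringSubmatch(line); m != nil {
		fmt.Println("container:", m[1])
	}
}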
Jan 17 00:50:33.563578 kubelet[2496]: I0117 00:50:33.563416 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:50:33.564112 kubelet[2496]: E0117 00:50:33.563925 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:33.564112 kubelet[2496]: E0117 00:50:33.564088 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:33.566506 containerd[1452]: time="2026-01-17T00:50:33.565861395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:50:34.443099 kubelet[2496]: E0117 00:50:34.442972 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:35.627164 containerd[1452]: time="2026-01-17T00:50:35.627067707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:35.628207 containerd[1452]: time="2026-01-17T00:50:35.628145298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:50:35.629527 containerd[1452]: time="2026-01-17T00:50:35.629468719Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:35.632591 containerd[1452]: time="2026-01-17T00:50:35.632433821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:35.633844 containerd[1452]: time="2026-01-17T00:50:35.633644006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.067737007s" Jan 17 00:50:35.633844 containerd[1452]: time="2026-01-17T00:50:35.633822118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:50:35.640658 containerd[1452]: time="2026-01-17T00:50:35.640546975Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:50:35.663981 containerd[1452]: time="2026-01-17T00:50:35.663903118Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3\"" Jan 17 00:50:35.665955 containerd[1452]: time="2026-01-17T00:50:35.665847963Z" level=info msg="StartContainer for \"3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3\"" Jan 17 
00:50:35.710162 systemd[1]: Started cri-containerd-3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3.scope - libcontainer container 3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3. Jan 17 00:50:35.758403 containerd[1452]: time="2026-01-17T00:50:35.758186991Z" level=info msg="StartContainer for \"3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3\" returns successfully" Jan 17 00:50:36.443420 kubelet[2496]: E0117 00:50:36.443213 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:36.466381 systemd[1]: cri-containerd-3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3.scope: Deactivated successfully. Jan 17 00:50:36.491002 kubelet[2496]: I0117 00:50:36.490802 2496 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:50:36.498578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3-rootfs.mount: Deactivated successfully. Jan 17 00:50:36.575595 kubelet[2496]: E0117 00:50:36.573647 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:36.585133 containerd[1452]: time="2026-01-17T00:50:36.583637041Z" level=info msg="shim disconnected" id=3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3 namespace=k8s.io Jan 17 00:50:36.585133 containerd[1452]: time="2026-01-17T00:50:36.584095354Z" level=warning msg="cleaning up after shim disconnected" id=3b709cf42feb73f1d0d50c660b387b16536f354d181515c5bb69c58a884544d3 namespace=k8s.io Jan 17 00:50:36.585133 containerd[1452]: time="2026-01-17T00:50:36.585077043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:50:36.586651 systemd[1]: Created slice kubepods-besteffort-pod4861c4dc_4420_41d7_806f_ea096c9baa96.slice - libcontainer container kubepods-besteffort-pod4861c4dc_4420_41d7_806f_ea096c9baa96.slice. Jan 17 00:50:36.600815 systemd[1]: Created slice kubepods-besteffort-poda676821e_dbbe_4544_a442_1cc84fb0d568.slice - libcontainer container kubepods-besteffort-poda676821e_dbbe_4544_a442_1cc84fb0d568.slice. Jan 17 00:50:36.608478 systemd[1]: Created slice kubepods-burstable-pod00ae415f_67f7_4e67_a9d9_4d68f93ea018.slice - libcontainer container kubepods-burstable-pod00ae415f_67f7_4e67_a9d9_4d68f93ea018.slice. Jan 17 00:50:36.624859 systemd[1]: Created slice kubepods-burstable-podc965ee07_9ebc_4401_bd94_6f4cb9cb8928.slice - libcontainer container kubepods-burstable-podc965ee07_9ebc_4401_bd94_6f4cb9cb8928.slice. Jan 17 00:50:36.639239 containerd[1452]: time="2026-01-17T00:50:36.639070256Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:50:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:50:36.640286 systemd[1]: Created slice kubepods-besteffort-podfff518d5_06d5_4f2e_9a9a_f374cb758607.slice - libcontainer container kubepods-besteffort-podfff518d5_06d5_4f2e_9a9a_f374cb758607.slice. 
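Annotation: the csi-node-driver pod keeps being skipped with "cni plugin not initialized" because the container runtime reports NetworkReady=false until a CNI configuration exists on the node. Once the install-cni container that just ran finishes writing Calico's conflist into the CNI config directory, the runtime flips to ready and the kubelet logs "Fast updating node status as it just became ready", after which slices for the pending workloads below are created. A quick check sketch, assuming the conventional /etc/cni/net.d path:

// cnicheck.go — a minimal sketch, assuming the conventional CNI config
// directory /etc/cni/net.d: the runtime reports the node network as not ready
// while this directory holds no .conf/.conflist files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".conf") || strings.HasSuffix(e.Name(), ".conflist") {
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	if len(confs) == 0 {
		fmt.Println("network not ready: no CNI config installed yet")
		return
	}
	fmt.Println("CNI config present:", confs)
}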
Jan 17 00:50:36.649805 systemd[1]: Created slice kubepods-besteffort-pode797004f_4966_4738_8311_6962046bba3a.slice - libcontainer container kubepods-besteffort-pode797004f_4966_4738_8311_6962046bba3a.slice. Jan 17 00:50:36.656387 systemd[1]: Created slice kubepods-besteffort-podd9a48e4c_2642_431f_9b1f_b247428bfac1.slice - libcontainer container kubepods-besteffort-podd9a48e4c_2642_431f_9b1f_b247428bfac1.slice. Jan 17 00:50:36.664202 kubelet[2496]: I0117 00:50:36.663801 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgk6b\" (UniqueName: \"kubernetes.io/projected/fff518d5-06d5-4f2e-9a9a-f374cb758607-kube-api-access-mgk6b\") pod \"goldmane-7c778bb748-s7ntg\" (UID: \"fff518d5-06d5-4f2e-9a9a-f374cb758607\") " pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:36.664202 kubelet[2496]: I0117 00:50:36.663847 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9a48e4c-2642-431f-9b1f-b247428bfac1-calico-apiserver-certs\") pod \"calico-apiserver-59cdfd4dfb-d7ft6\" (UID: \"d9a48e4c-2642-431f-9b1f-b247428bfac1\") " pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" Jan 17 00:50:36.664202 kubelet[2496]: I0117 00:50:36.663876 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5k9\" (UniqueName: \"kubernetes.io/projected/e797004f-4966-4738-8311-6962046bba3a-kube-api-access-rm5k9\") pod \"calico-apiserver-59cdfd4dfb-nd9rl\" (UID: \"e797004f-4966-4738-8311-6962046bba3a\") " pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" Jan 17 00:50:36.664202 kubelet[2496]: I0117 00:50:36.663918 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff518d5-06d5-4f2e-9a9a-f374cb758607-config\") pod \"goldmane-7c778bb748-s7ntg\" (UID: \"fff518d5-06d5-4f2e-9a9a-f374cb758607\") " pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:36.665170 kubelet[2496]: I0117 00:50:36.665140 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vph2s\" (UniqueName: \"kubernetes.io/projected/a676821e-dbbe-4544-a442-1cc84fb0d568-kube-api-access-vph2s\") pod \"whisker-7c79dcf7c7-p9s7n\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " pod="calico-system/whisker-7c79dcf7c7-p9s7n" Jan 17 00:50:36.665227 kubelet[2496]: I0117 00:50:36.665174 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fff518d5-06d5-4f2e-9a9a-f374cb758607-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-s7ntg\" (UID: \"fff518d5-06d5-4f2e-9a9a-f374cb758607\") " pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:36.665227 kubelet[2496]: I0117 00:50:36.665196 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fff518d5-06d5-4f2e-9a9a-f374cb758607-goldmane-key-pair\") pod \"goldmane-7c778bb748-s7ntg\" (UID: \"fff518d5-06d5-4f2e-9a9a-f374cb758607\") " pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:36.665227 kubelet[2496]: I0117 00:50:36.665221 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/00ae415f-67f7-4e67-a9d9-4d68f93ea018-config-volume\") pod \"coredns-66bc5c9577-gm8m8\" (UID: \"00ae415f-67f7-4e67-a9d9-4d68f93ea018\") " pod="kube-system/coredns-66bc5c9577-gm8m8" Jan 17 00:50:36.665319 kubelet[2496]: I0117 00:50:36.665245 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzwcp\" (UniqueName: \"kubernetes.io/projected/d9a48e4c-2642-431f-9b1f-b247428bfac1-kube-api-access-zzwcp\") pod \"calico-apiserver-59cdfd4dfb-d7ft6\" (UID: \"d9a48e4c-2642-431f-9b1f-b247428bfac1\") " pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" Jan 17 00:50:36.665319 kubelet[2496]: I0117 00:50:36.665270 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4861c4dc-4420-41d7-806f-ea096c9baa96-tigera-ca-bundle\") pod \"calico-kube-controllers-7b6b7bfc9b-vp5zs\" (UID: \"4861c4dc-4420-41d7-806f-ea096c9baa96\") " pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" Jan 17 00:50:36.665385 kubelet[2496]: I0117 00:50:36.665327 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-backend-key-pair\") pod \"whisker-7c79dcf7c7-p9s7n\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " pod="calico-system/whisker-7c79dcf7c7-p9s7n" Jan 17 00:50:36.665385 kubelet[2496]: I0117 00:50:36.665352 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c965ee07-9ebc-4401-bd94-6f4cb9cb8928-config-volume\") pod \"coredns-66bc5c9577-dvm5c\" (UID: \"c965ee07-9ebc-4401-bd94-6f4cb9cb8928\") " pod="kube-system/coredns-66bc5c9577-dvm5c" Jan 17 00:50:36.665385 kubelet[2496]: I0117 00:50:36.665377 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czvr\" (UniqueName: \"kubernetes.io/projected/c965ee07-9ebc-4401-bd94-6f4cb9cb8928-kube-api-access-7czvr\") pod \"coredns-66bc5c9577-dvm5c\" (UID: \"c965ee07-9ebc-4401-bd94-6f4cb9cb8928\") " pod="kube-system/coredns-66bc5c9577-dvm5c" Jan 17 00:50:36.665476 kubelet[2496]: I0117 00:50:36.665403 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mw68\" (UniqueName: \"kubernetes.io/projected/00ae415f-67f7-4e67-a9d9-4d68f93ea018-kube-api-access-9mw68\") pod \"coredns-66bc5c9577-gm8m8\" (UID: \"00ae415f-67f7-4e67-a9d9-4d68f93ea018\") " pod="kube-system/coredns-66bc5c9577-gm8m8" Jan 17 00:50:36.665513 kubelet[2496]: I0117 00:50:36.665486 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsds\" (UniqueName: \"kubernetes.io/projected/4861c4dc-4420-41d7-806f-ea096c9baa96-kube-api-access-hfsds\") pod \"calico-kube-controllers-7b6b7bfc9b-vp5zs\" (UID: \"4861c4dc-4420-41d7-806f-ea096c9baa96\") " pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" Jan 17 00:50:36.665543 kubelet[2496]: I0117 00:50:36.665519 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-ca-bundle\") pod \"whisker-7c79dcf7c7-p9s7n\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " 
pod="calico-system/whisker-7c79dcf7c7-p9s7n" Jan 17 00:50:36.665575 kubelet[2496]: I0117 00:50:36.665544 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e797004f-4966-4738-8311-6962046bba3a-calico-apiserver-certs\") pod \"calico-apiserver-59cdfd4dfb-nd9rl\" (UID: \"e797004f-4966-4738-8311-6962046bba3a\") " pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" Jan 17 00:50:36.903247 containerd[1452]: time="2026-01-17T00:50:36.903044141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6b7bfc9b-vp5zs,Uid:4861c4dc-4420-41d7-806f-ea096c9baa96,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:36.907322 containerd[1452]: time="2026-01-17T00:50:36.907189523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c79dcf7c7-p9s7n,Uid:a676821e-dbbe-4544-a442-1cc84fb0d568,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:36.919844 kubelet[2496]: E0117 00:50:36.919652 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:36.920974 containerd[1452]: time="2026-01-17T00:50:36.920890726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gm8m8,Uid:00ae415f-67f7-4e67-a9d9-4d68f93ea018,Namespace:kube-system,Attempt:0,}" Jan 17 00:50:36.937505 kubelet[2496]: E0117 00:50:36.937209 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:36.937899 containerd[1452]: time="2026-01-17T00:50:36.937647706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvm5c,Uid:c965ee07-9ebc-4401-bd94-6f4cb9cb8928,Namespace:kube-system,Attempt:0,}" Jan 17 00:50:36.963285 containerd[1452]: time="2026-01-17T00:50:36.963186504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-nd9rl,Uid:e797004f-4966-4738-8311-6962046bba3a,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:50:36.964499 containerd[1452]: time="2026-01-17T00:50:36.964078351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-s7ntg,Uid:fff518d5-06d5-4f2e-9a9a-f374cb758607,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:36.975951 containerd[1452]: time="2026-01-17T00:50:36.975440995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-d7ft6,Uid:d9a48e4c-2642-431f-9b1f-b247428bfac1,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:50:37.130946 containerd[1452]: time="2026-01-17T00:50:37.130897971Z" level=error msg="Failed to destroy network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.139089 containerd[1452]: time="2026-01-17T00:50:37.139047944Z" level=error msg="encountered an error cleaning up failed sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.141279 containerd[1452]: 
time="2026-01-17T00:50:37.141242542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6b7bfc9b-vp5zs,Uid:4861c4dc-4420-41d7-806f-ea096c9baa96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.141632 containerd[1452]: time="2026-01-17T00:50:37.140955843Z" level=error msg="Failed to destroy network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.142226 containerd[1452]: time="2026-01-17T00:50:37.142152322Z" level=error msg="Failed to destroy network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.143307 containerd[1452]: time="2026-01-17T00:50:37.142971217Z" level=error msg="encountered an error cleaning up failed sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.143307 containerd[1452]: time="2026-01-17T00:50:37.143030067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c79dcf7c7-p9s7n,Uid:a676821e-dbbe-4544-a442-1cc84fb0d568,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.143307 containerd[1452]: time="2026-01-17T00:50:37.143159379Z" level=error msg="encountered an error cleaning up failed sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.143307 containerd[1452]: time="2026-01-17T00:50:37.143195085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gm8m8,Uid:00ae415f-67f7-4e67-a9d9-4d68f93ea018,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.182436 kubelet[2496]: E0117 00:50:37.182158 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.182436 kubelet[2496]: E0117 00:50:37.182260 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gm8m8" Jan 17 00:50:37.182436 kubelet[2496]: E0117 00:50:37.182283 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gm8m8" Jan 17 00:50:37.184525 kubelet[2496]: E0117 00:50:37.182377 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gm8m8_kube-system(00ae415f-67f7-4e67-a9d9-4d68f93ea018)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gm8m8_kube-system(00ae415f-67f7-4e67-a9d9-4d68f93ea018)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gm8m8" podUID="00ae415f-67f7-4e67-a9d9-4d68f93ea018" Jan 17 00:50:37.184525 kubelet[2496]: E0117 00:50:37.182482 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.184525 kubelet[2496]: E0117 00:50:37.182523 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" Jan 17 00:50:37.184818 kubelet[2496]: E0117 00:50:37.182542 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" Jan 17 00:50:37.184818 kubelet[2496]: E0117 00:50:37.182583 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-7b6b7bfc9b-vp5zs_calico-system(4861c4dc-4420-41d7-806f-ea096c9baa96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b6b7bfc9b-vp5zs_calico-system(4861c4dc-4420-41d7-806f-ea096c9baa96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:50:37.184818 kubelet[2496]: E0117 00:50:37.182618 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.185083 kubelet[2496]: E0117 00:50:37.182636 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c79dcf7c7-p9s7n" Jan 17 00:50:37.185083 kubelet[2496]: E0117 00:50:37.182652 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c79dcf7c7-p9s7n" Jan 17 00:50:37.185083 kubelet[2496]: E0117 00:50:37.182799 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c79dcf7c7-p9s7n_calico-system(a676821e-dbbe-4544-a442-1cc84fb0d568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c79dcf7c7-p9s7n_calico-system(a676821e-dbbe-4544-a442-1cc84fb0d568)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c79dcf7c7-p9s7n" podUID="a676821e-dbbe-4544-a442-1cc84fb0d568" Jan 17 00:50:37.213560 containerd[1452]: time="2026-01-17T00:50:37.213370676Z" level=error msg="Failed to destroy network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.214116 containerd[1452]: time="2026-01-17T00:50:37.213901435Z" level=error msg="Failed to destroy network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.215530 containerd[1452]: time="2026-01-17T00:50:37.214843901Z" level=error msg="encountered an error cleaning up failed sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.215530 containerd[1452]: time="2026-01-17T00:50:37.214910955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-d7ft6,Uid:d9a48e4c-2642-431f-9b1f-b247428bfac1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.216494 kubelet[2496]: E0117 00:50:37.215114 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.216494 kubelet[2496]: E0117 00:50:37.215162 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" Jan 17 00:50:37.216494 kubelet[2496]: E0117 00:50:37.215184 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" Jan 17 00:50:37.216644 kubelet[2496]: E0117 00:50:37.215233 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59cdfd4dfb-d7ft6_calico-apiserver(d9a48e4c-2642-431f-9b1f-b247428bfac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59cdfd4dfb-d7ft6_calico-apiserver(d9a48e4c-2642-431f-9b1f-b247428bfac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:50:37.255082 containerd[1452]: 
time="2026-01-17T00:50:37.254893843Z" level=error msg="Failed to destroy network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.255540 containerd[1452]: time="2026-01-17T00:50:37.255479524Z" level=error msg="encountered an error cleaning up failed sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.255999 containerd[1452]: time="2026-01-17T00:50:37.255559082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-nd9rl,Uid:e797004f-4966-4738-8311-6962046bba3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.257873 kubelet[2496]: E0117 00:50:37.255875 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.257873 kubelet[2496]: E0117 00:50:37.255924 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" Jan 17 00:50:37.257873 kubelet[2496]: E0117 00:50:37.255942 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" Jan 17 00:50:37.258004 kubelet[2496]: E0117 00:50:37.255990 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59cdfd4dfb-nd9rl_calico-apiserver(e797004f-4966-4738-8311-6962046bba3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59cdfd4dfb-nd9rl_calico-apiserver(e797004f-4966-4738-8311-6962046bba3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:50:37.264595 containerd[1452]: time="2026-01-17T00:50:37.264377573Z" level=error msg="Failed to destroy network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.265179 containerd[1452]: time="2026-01-17T00:50:37.265102404Z" level=error msg="encountered an error cleaning up failed sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.275394 containerd[1452]: time="2026-01-17T00:50:37.275240530Z" level=error msg="encountered an error cleaning up failed sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.293359 containerd[1452]: time="2026-01-17T00:50:37.293093719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-s7ntg,Uid:fff518d5-06d5-4f2e-9a9a-f374cb758607,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.293359 containerd[1452]: time="2026-01-17T00:50:37.293097649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvm5c,Uid:c965ee07-9ebc-4401-bd94-6f4cb9cb8928,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.293912 kubelet[2496]: E0117 00:50:37.293813 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.293912 kubelet[2496]: E0117 00:50:37.293903 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:37.294058 kubelet[2496]: E0117 00:50:37.293928 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-s7ntg" Jan 17 00:50:37.294058 kubelet[2496]: E0117 00:50:37.294015 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-s7ntg_calico-system(fff518d5-06d5-4f2e-9a9a-f374cb758607)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-s7ntg_calico-system(fff518d5-06d5-4f2e-9a9a-f374cb758607)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:50:37.294208 kubelet[2496]: E0117 00:50:37.294098 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.294208 kubelet[2496]: E0117 00:50:37.294161 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dvm5c" Jan 17 00:50:37.294208 kubelet[2496]: E0117 00:50:37.294185 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dvm5c" Jan 17 00:50:37.294317 kubelet[2496]: E0117 00:50:37.294286 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dvm5c_kube-system(c965ee07-9ebc-4401-bd94-6f4cb9cb8928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dvm5c_kube-system(c965ee07-9ebc-4401-bd94-6f4cb9cb8928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dvm5c" podUID="c965ee07-9ebc-4401-bd94-6f4cb9cb8928" Jan 17 00:50:37.578913 kubelet[2496]: I0117 00:50:37.578537 2496 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:37.582800 kubelet[2496]: I0117 00:50:37.582628 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:37.586144 kubelet[2496]: I0117 00:50:37.586040 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:37.589140 kubelet[2496]: I0117 00:50:37.588785 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:37.609572 containerd[1452]: time="2026-01-17T00:50:37.609537389Z" level=info msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" Jan 17 00:50:37.610226 containerd[1452]: time="2026-01-17T00:50:37.610063524Z" level=info msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" Jan 17 00:50:37.610833 containerd[1452]: time="2026-01-17T00:50:37.610614675Z" level=info msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" Jan 17 00:50:37.611610 containerd[1452]: time="2026-01-17T00:50:37.611528626Z" level=info msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" Jan 17 00:50:37.612796 containerd[1452]: time="2026-01-17T00:50:37.612613108Z" level=info msg="Ensure that sandbox a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a in task-service has been cleanup successfully" Jan 17 00:50:37.612796 containerd[1452]: time="2026-01-17T00:50:37.612641987Z" level=info msg="Ensure that sandbox 2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7 in task-service has been cleanup successfully" Jan 17 00:50:37.612878 containerd[1452]: time="2026-01-17T00:50:37.612668901Z" level=info msg="Ensure that sandbox f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05 in task-service has been cleanup successfully" Jan 17 00:50:37.614154 kubelet[2496]: E0117 00:50:37.614029 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:37.615206 containerd[1452]: time="2026-01-17T00:50:37.612644650Z" level=info msg="Ensure that sandbox e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d in task-service has been cleanup successfully" Jan 17 00:50:37.629110 containerd[1452]: time="2026-01-17T00:50:37.629023582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:50:37.633528 kubelet[2496]: I0117 00:50:37.632178 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:37.637514 containerd[1452]: time="2026-01-17T00:50:37.637020402Z" level=info msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" Jan 17 00:50:37.637514 containerd[1452]: time="2026-01-17T00:50:37.637226165Z" level=info msg="Ensure that sandbox 1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78 in task-service has been cleanup successfully" Jan 17 00:50:37.649614 kubelet[2496]: I0117 00:50:37.649534 2496 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:37.652312 containerd[1452]: time="2026-01-17T00:50:37.651324283Z" level=info msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" Jan 17 00:50:37.652312 containerd[1452]: time="2026-01-17T00:50:37.651540085Z" level=info msg="Ensure that sandbox 50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92 in task-service has been cleanup successfully" Jan 17 00:50:37.659775 kubelet[2496]: I0117 00:50:37.659537 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:37.660328 containerd[1452]: time="2026-01-17T00:50:37.660289709Z" level=info msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" Jan 17 00:50:37.660663 containerd[1452]: time="2026-01-17T00:50:37.660510951Z" level=info msg="Ensure that sandbox a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b in task-service has been cleanup successfully" Jan 17 00:50:37.726002 containerd[1452]: time="2026-01-17T00:50:37.725900237Z" level=error msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" failed" error="failed to destroy network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.726768 kubelet[2496]: E0117 00:50:37.726506 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:37.726768 kubelet[2496]: E0117 00:50:37.726561 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d"} Jan 17 00:50:37.726768 kubelet[2496]: E0117 00:50:37.726618 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a676821e-dbbe-4544-a442-1cc84fb0d568\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.726768 kubelet[2496]: E0117 00:50:37.726643 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a676821e-dbbe-4544-a442-1cc84fb0d568\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c79dcf7c7-p9s7n" 
podUID="a676821e-dbbe-4544-a442-1cc84fb0d568" Jan 17 00:50:37.735794 containerd[1452]: time="2026-01-17T00:50:37.735641259Z" level=error msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" failed" error="failed to destroy network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.736621 kubelet[2496]: E0117 00:50:37.736413 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:37.736863 kubelet[2496]: E0117 00:50:37.736805 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7"} Jan 17 00:50:37.737180 kubelet[2496]: E0117 00:50:37.737078 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c965ee07-9ebc-4401-bd94-6f4cb9cb8928\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.737276 kubelet[2496]: E0117 00:50:37.737240 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c965ee07-9ebc-4401-bd94-6f4cb9cb8928\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dvm5c" podUID="c965ee07-9ebc-4401-bd94-6f4cb9cb8928" Jan 17 00:50:37.738141 containerd[1452]: time="2026-01-17T00:50:37.738115403Z" level=error msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" failed" error="failed to destroy network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.738398 kubelet[2496]: E0117 00:50:37.738361 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:37.738497 kubelet[2496]: E0117 
00:50:37.738482 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05"} Jan 17 00:50:37.738564 kubelet[2496]: E0117 00:50:37.738551 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00ae415f-67f7-4e67-a9d9-4d68f93ea018\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.738816 kubelet[2496]: E0117 00:50:37.738793 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00ae415f-67f7-4e67-a9d9-4d68f93ea018\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gm8m8" podUID="00ae415f-67f7-4e67-a9d9-4d68f93ea018" Jan 17 00:50:37.739126 containerd[1452]: time="2026-01-17T00:50:37.739042275Z" level=error msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" failed" error="failed to destroy network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.739379 kubelet[2496]: E0117 00:50:37.739303 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:37.739379 kubelet[2496]: E0117 00:50:37.739354 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a"} Jan 17 00:50:37.739379 kubelet[2496]: E0117 00:50:37.739372 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e797004f-4966-4738-8311-6962046bba3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.739563 kubelet[2496]: E0117 00:50:37.739392 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e797004f-4966-4738-8311-6962046bba3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:50:37.740842 containerd[1452]: time="2026-01-17T00:50:37.740757224Z" level=error msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" failed" error="failed to destroy network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.741031 kubelet[2496]: E0117 00:50:37.740961 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:37.741031 kubelet[2496]: E0117 00:50:37.741004 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92"} Jan 17 00:50:37.741523 kubelet[2496]: E0117 00:50:37.741032 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fff518d5-06d5-4f2e-9a9a-f374cb758607\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.741523 kubelet[2496]: E0117 00:50:37.741057 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fff518d5-06d5-4f2e-9a9a-f374cb758607\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:50:37.741523 kubelet[2496]: E0117 00:50:37.741391 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:37.741523 kubelet[2496]: E0117 00:50:37.741414 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78"} Jan 17 00:50:37.741814 containerd[1452]: time="2026-01-17T00:50:37.741220917Z" level=error msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" failed" error="failed to destroy network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.741839 kubelet[2496]: E0117 00:50:37.741435 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a48e4c-2642-431f-9b1f-b247428bfac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.741839 kubelet[2496]: E0117 00:50:37.741453 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a48e4c-2642-431f-9b1f-b247428bfac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:50:37.760176 containerd[1452]: time="2026-01-17T00:50:37.760113189Z" level=error msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" failed" error="failed to destroy network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:37.760627 kubelet[2496]: E0117 00:50:37.760560 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:37.760779 kubelet[2496]: E0117 00:50:37.760627 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b"} Jan 17 00:50:37.760779 kubelet[2496]: E0117 00:50:37.760648 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4861c4dc-4420-41d7-806f-ea096c9baa96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:37.760779 kubelet[2496]: E0117 00:50:37.760668 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4861c4dc-4420-41d7-806f-ea096c9baa96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:50:37.782040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d-shm.mount: Deactivated successfully. Jan 17 00:50:37.782191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b-shm.mount: Deactivated successfully. Jan 17 00:50:38.452100 systemd[1]: Created slice kubepods-besteffort-pod4022344e_59ba_4aec_9ee8_9c1779407c17.slice - libcontainer container kubepods-besteffort-pod4022344e_59ba_4aec_9ee8_9c1779407c17.slice. Jan 17 00:50:38.458400 containerd[1452]: time="2026-01-17T00:50:38.458243591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pldn,Uid:4022344e-59ba-4aec-9ee8-9c1779407c17,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:38.545057 containerd[1452]: time="2026-01-17T00:50:38.544884884Z" level=error msg="Failed to destroy network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:38.545639 containerd[1452]: time="2026-01-17T00:50:38.545543240Z" level=error msg="encountered an error cleaning up failed sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:38.545794 containerd[1452]: time="2026-01-17T00:50:38.545635643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pldn,Uid:4022344e-59ba-4aec-9ee8-9c1779407c17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:38.546095 kubelet[2496]: E0117 00:50:38.546017 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:38.546147 kubelet[2496]: E0117 00:50:38.546111 2496 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:38.546147 kubelet[2496]: E0117 00:50:38.546136 2496 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pldn" Jan 17 00:50:38.546473 kubelet[2496]: E0117 00:50:38.546195 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:38.548610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592-shm.mount: Deactivated successfully. 
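Every RunPodSandbox and StopPodSandbox failure in this window has the same root cause: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container has not started yet because its image is still being pulled, so each CNI add or delete fails with the stat error and kubelet keeps retrying. A minimal standalone probe of that readiness condition is sketched below; it is illustrative only, the file name nodenamecheck.go is hypothetical, and it is not Calico's plugin code.

    // nodenamecheck.go - illustrative sketch of the condition behind the repeated
    // "stat /var/lib/calico/nodename" errors; not Calico's actual code.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node on startup

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // The state this log shows: calico/node has not started yet, the file
            // does not exist, and every CNI ADD/DEL fails with this stat error.
            fmt.Fprintf(os.Stderr, "calico CNI not ready: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("calico CNI ready, node name: %s\n", strings.TrimSpace(string(data)))
    }

Once calico/node starts and writes the file, the check passes and subsequent CNI calls can proceed.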
Jan 17 00:50:38.663653 kubelet[2496]: I0117 00:50:38.663475 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:38.664637 containerd[1452]: time="2026-01-17T00:50:38.664470803Z" level=info msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" Jan 17 00:50:38.665042 containerd[1452]: time="2026-01-17T00:50:38.664833248Z" level=info msg="Ensure that sandbox 802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592 in task-service has been cleanup successfully" Jan 17 00:50:38.700946 containerd[1452]: time="2026-01-17T00:50:38.700765424Z" level=error msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" failed" error="failed to destroy network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:50:38.701175 kubelet[2496]: E0117 00:50:38.701104 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:38.701245 kubelet[2496]: E0117 00:50:38.701188 2496 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592"} Jan 17 00:50:38.701245 kubelet[2496]: E0117 00:50:38.701232 2496 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4022344e-59ba-4aec-9ee8-9c1779407c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:50:38.701415 kubelet[2496]: E0117 00:50:38.701265 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4022344e-59ba-4aec-9ee8-9c1779407c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:44.983809 kubelet[2496]: I0117 00:50:44.982294 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:50:44.983809 kubelet[2496]: E0117 00:50:44.982948 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:45.659013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249898582.mount: 
Deactivated successfully. Jan 17 00:50:45.687577 kubelet[2496]: E0117 00:50:45.687523 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:45.779876 containerd[1452]: time="2026-01-17T00:50:45.779813587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:45.781013 containerd[1452]: time="2026-01-17T00:50:45.780944366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:50:45.782949 containerd[1452]: time="2026-01-17T00:50:45.782887533Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:45.785817 containerd[1452]: time="2026-01-17T00:50:45.785773985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:50:45.786449 containerd[1452]: time="2026-01-17T00:50:45.786393443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.157311761s" Jan 17 00:50:45.786495 containerd[1452]: time="2026-01-17T00:50:45.786460328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:50:45.803068 containerd[1452]: time="2026-01-17T00:50:45.802986408Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:50:45.827792 containerd[1452]: time="2026-01-17T00:50:45.827558761Z" level=info msg="CreateContainer within sandbox \"e5438058493c2641867f34be87f351c3c1f2de6309dd4025f08bbcca645f13da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cadf990b93be8ee9654a7cb59b6494b6d55f85eaee666a2e2c47b37a51fd6f46\"" Jan 17 00:50:45.830248 containerd[1452]: time="2026-01-17T00:50:45.830179581Z" level=info msg="StartContainer for \"cadf990b93be8ee9654a7cb59b6494b6d55f85eaee666a2e2c47b37a51fd6f46\"" Jan 17 00:50:45.894891 systemd[1]: Started cri-containerd-cadf990b93be8ee9654a7cb59b6494b6d55f85eaee666a2e2c47b37a51fd6f46.scope - libcontainer container cadf990b93be8ee9654a7cb59b6494b6d55f85eaee666a2e2c47b37a51fd6f46. Jan 17 00:50:45.940230 containerd[1452]: time="2026-01-17T00:50:45.940063317Z" level=info msg="StartContainer for \"cadf990b93be8ee9654a7cb59b6494b6d55f85eaee666a2e2c47b37a51fd6f46\" returns successfully" Jan 17 00:50:46.049477 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:50:46.050058 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:50:46.193813 containerd[1452]: time="2026-01-17T00:50:46.193491690Z" level=info msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.288 [INFO][3741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.290 [INFO][3741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" iface="eth0" netns="/var/run/netns/cni-ddda7af8-0e8b-0f73-b03f-317e43f0a2e8" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.291 [INFO][3741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" iface="eth0" netns="/var/run/netns/cni-ddda7af8-0e8b-0f73-b03f-317e43f0a2e8" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.292 [INFO][3741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" iface="eth0" netns="/var/run/netns/cni-ddda7af8-0e8b-0f73-b03f-317e43f0a2e8" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.292 [INFO][3741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.292 [INFO][3741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.395 [INFO][3756] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.396 [INFO][3756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.397 [INFO][3756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.407 [WARNING][3756] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.407 [INFO][3756] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.409 [INFO][3756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:46.417003 containerd[1452]: 2026-01-17 00:50:46.412 [INFO][3741] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:50:46.417878 containerd[1452]: time="2026-01-17T00:50:46.417631362Z" level=info msg="TearDown network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" successfully" Jan 17 00:50:46.417878 containerd[1452]: time="2026-01-17T00:50:46.417757286Z" level=info msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" returns successfully" Jan 17 00:50:46.451605 kubelet[2496]: I0117 00:50:46.451134 2496 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-backend-key-pair\") pod \"a676821e-dbbe-4544-a442-1cc84fb0d568\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " Jan 17 00:50:46.451605 kubelet[2496]: I0117 00:50:46.451240 2496 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-ca-bundle\") pod \"a676821e-dbbe-4544-a442-1cc84fb0d568\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " Jan 17 00:50:46.452155 kubelet[2496]: I0117 00:50:46.451898 2496 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vph2s\" (UniqueName: \"kubernetes.io/projected/a676821e-dbbe-4544-a442-1cc84fb0d568-kube-api-access-vph2s\") pod \"a676821e-dbbe-4544-a442-1cc84fb0d568\" (UID: \"a676821e-dbbe-4544-a442-1cc84fb0d568\") " Jan 17 00:50:46.452553 kubelet[2496]: I0117 00:50:46.452493 2496 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a676821e-dbbe-4544-a442-1cc84fb0d568" (UID: "a676821e-dbbe-4544-a442-1cc84fb0d568"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:50:46.457072 kubelet[2496]: I0117 00:50:46.456989 2496 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a676821e-dbbe-4544-a442-1cc84fb0d568" (UID: "a676821e-dbbe-4544-a442-1cc84fb0d568"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:50:46.458059 kubelet[2496]: I0117 00:50:46.457872 2496 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a676821e-dbbe-4544-a442-1cc84fb0d568-kube-api-access-vph2s" (OuterVolumeSpecName: "kube-api-access-vph2s") pod "a676821e-dbbe-4544-a442-1cc84fb0d568" (UID: "a676821e-dbbe-4544-a442-1cc84fb0d568"). InnerVolumeSpecName "kube-api-access-vph2s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:50:46.553126 kubelet[2496]: I0117 00:50:46.552997 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 17 00:50:46.553126 kubelet[2496]: I0117 00:50:46.553083 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a676821e-dbbe-4544-a442-1cc84fb0d568-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 17 00:50:46.553126 kubelet[2496]: I0117 00:50:46.553099 2496 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vph2s\" (UniqueName: \"kubernetes.io/projected/a676821e-dbbe-4544-a442-1cc84fb0d568-kube-api-access-vph2s\") on node \"localhost\" DevicePath \"\"" Jan 17 00:50:46.661234 systemd[1]: run-netns-cni\x2dddda7af8\x2d0e8b\x2d0f73\x2db03f\x2d317e43f0a2e8.mount: Deactivated successfully. Jan 17 00:50:46.661426 systemd[1]: var-lib-kubelet-pods-a676821e\x2ddbbe\x2d4544\x2da442\x2d1cc84fb0d568-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvph2s.mount: Deactivated successfully. Jan 17 00:50:46.661534 systemd[1]: var-lib-kubelet-pods-a676821e\x2ddbbe\x2d4544\x2da442\x2d1cc84fb0d568-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:50:46.695792 kubelet[2496]: E0117 00:50:46.694254 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:46.701307 systemd[1]: Removed slice kubepods-besteffort-poda676821e_dbbe_4544_a442_1cc84fb0d568.slice - libcontainer container kubepods-besteffort-poda676821e_dbbe_4544_a442_1cc84fb0d568.slice. Jan 17 00:50:46.714981 kubelet[2496]: I0117 00:50:46.714863 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jk6xz" podStartSLOduration=1.582251034 podStartE2EDuration="16.714846386s" podCreationTimestamp="2026-01-17 00:50:30 +0000 UTC" firstStartedPulling="2026-01-17 00:50:30.654888071 +0000 UTC m=+25.356700947" lastFinishedPulling="2026-01-17 00:50:45.787483424 +0000 UTC m=+40.489296299" observedRunningTime="2026-01-17 00:50:46.714005668 +0000 UTC m=+41.415818564" watchObservedRunningTime="2026-01-17 00:50:46.714846386 +0000 UTC m=+41.416659272" Jan 17 00:50:46.790567 systemd[1]: Created slice kubepods-besteffort-pode26a3e55_fb3a_4994_957c_83980e4edeb6.slice - libcontainer container kubepods-besteffort-pode26a3e55_fb3a_4994_957c_83980e4edeb6.slice. 
Jan 17 00:50:46.857210 kubelet[2496]: I0117 00:50:46.857131 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e26a3e55-fb3a-4994-957c-83980e4edeb6-whisker-backend-key-pair\") pod \"whisker-bf56495c7-svn2v\" (UID: \"e26a3e55-fb3a-4994-957c-83980e4edeb6\") " pod="calico-system/whisker-bf56495c7-svn2v" Jan 17 00:50:46.857210 kubelet[2496]: I0117 00:50:46.857201 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26a3e55-fb3a-4994-957c-83980e4edeb6-whisker-ca-bundle\") pod \"whisker-bf56495c7-svn2v\" (UID: \"e26a3e55-fb3a-4994-957c-83980e4edeb6\") " pod="calico-system/whisker-bf56495c7-svn2v" Jan 17 00:50:46.857533 kubelet[2496]: I0117 00:50:46.857226 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29n2b\" (UniqueName: \"kubernetes.io/projected/e26a3e55-fb3a-4994-957c-83980e4edeb6-kube-api-access-29n2b\") pod \"whisker-bf56495c7-svn2v\" (UID: \"e26a3e55-fb3a-4994-957c-83980e4edeb6\") " pod="calico-system/whisker-bf56495c7-svn2v" Jan 17 00:50:47.101936 containerd[1452]: time="2026-01-17T00:50:47.101875875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf56495c7-svn2v,Uid:e26a3e55-fb3a-4994-957c-83980e4edeb6,Namespace:calico-system,Attempt:0,}" Jan 17 00:50:47.372799 systemd-networkd[1386]: cali092d79052e2: Link UP Jan 17 00:50:47.373441 systemd-networkd[1386]: cali092d79052e2: Gained carrier Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.262 [INFO][3780] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.275 [INFO][3780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bf56495c7--svn2v-eth0 whisker-bf56495c7- calico-system e26a3e55-fb3a-4994-957c-83980e4edeb6 923 0 2026-01-17 00:50:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bf56495c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bf56495c7-svn2v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali092d79052e2 [] [] }} ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.275 [INFO][3780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.313 [INFO][3795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" HandleID="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Workload="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.313 [INFO][3795] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" 
HandleID="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Workload="localhost-k8s-whisker--bf56495c7--svn2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000523ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bf56495c7-svn2v", "timestamp":"2026-01-17 00:50:47.313186236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.313 [INFO][3795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.313 [INFO][3795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.313 [INFO][3795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.324 [INFO][3795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.332 [INFO][3795] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.338 [INFO][3795] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.341 [INFO][3795] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.344 [INFO][3795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.344 [INFO][3795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.346 [INFO][3795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.351 [INFO][3795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.358 [INFO][3795] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.358 [INFO][3795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" host="localhost" Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.358 [INFO][3795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:47.391745 containerd[1452]: 2026-01-17 00:50:47.358 [INFO][3795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" HandleID="k8s-pod-network.fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Workload="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.362 [INFO][3780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf56495c7--svn2v-eth0", GenerateName:"whisker-bf56495c7-", Namespace:"calico-system", SelfLink:"", UID:"e26a3e55-fb3a-4994-957c-83980e4edeb6", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf56495c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bf56495c7-svn2v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali092d79052e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.362 [INFO][3780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.362 [INFO][3780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali092d79052e2 ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.373 [INFO][3780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.375 [INFO][3780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf56495c7--svn2v-eth0", GenerateName:"whisker-bf56495c7-", Namespace:"calico-system", SelfLink:"", UID:"e26a3e55-fb3a-4994-957c-83980e4edeb6", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf56495c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d", Pod:"whisker-bf56495c7-svn2v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali092d79052e2", MAC:"12:82:98:aa:6f:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:47.393287 containerd[1452]: 2026-01-17 00:50:47.388 [INFO][3780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d" Namespace="calico-system" Pod="whisker-bf56495c7-svn2v" WorkloadEndpoint="localhost-k8s-whisker--bf56495c7--svn2v-eth0" Jan 17 00:50:47.437520 containerd[1452]: time="2026-01-17T00:50:47.437089037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:47.437520 containerd[1452]: time="2026-01-17T00:50:47.437153807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:47.437520 containerd[1452]: time="2026-01-17T00:50:47.437171279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:47.437520 containerd[1452]: time="2026-01-17T00:50:47.437372335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:47.446513 kubelet[2496]: I0117 00:50:47.446326 2496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a676821e-dbbe-4544-a442-1cc84fb0d568" path="/var/lib/kubelet/pods/a676821e-dbbe-4544-a442-1cc84fb0d568/volumes" Jan 17 00:50:47.469068 systemd[1]: Started cri-containerd-fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d.scope - libcontainer container fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d. 
Jan 17 00:50:47.491187 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:47.546410 containerd[1452]: time="2026-01-17T00:50:47.546278970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf56495c7-svn2v,Uid:e26a3e55-fb3a-4994-957c-83980e4edeb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"fae80d9fdb8c7bf22fb49e0d7da548182bc5cb3ba2253cff8285b44237d2959d\"" Jan 17 00:50:47.553278 containerd[1452]: time="2026-01-17T00:50:47.553082245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:50:47.642490 containerd[1452]: time="2026-01-17T00:50:47.642294287Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:47.661768 containerd[1452]: time="2026-01-17T00:50:47.645891005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:50:47.661768 containerd[1452]: time="2026-01-17T00:50:47.645979183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:50:47.662588 kubelet[2496]: E0117 00:50:47.662209 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:50:47.662588 kubelet[2496]: E0117 00:50:47.662254 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:50:47.662588 kubelet[2496]: E0117 00:50:47.662322 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bf56495c7-svn2v_calico-system(e26a3e55-fb3a-4994-957c-83980e4edeb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:47.665598 containerd[1452]: time="2026-01-17T00:50:47.665332999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:50:47.698047 kubelet[2496]: I0117 00:50:47.697979 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:50:47.698374 kubelet[2496]: E0117 00:50:47.698326 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:47.747348 containerd[1452]: time="2026-01-17T00:50:47.747183352Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:47.754052 containerd[1452]: time="2026-01-17T00:50:47.753811755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:50:47.754052 containerd[1452]: time="2026-01-17T00:50:47.753888237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:50:47.754933 kubelet[2496]: E0117 00:50:47.754439 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:50:47.754933 kubelet[2496]: E0117 00:50:47.754491 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:50:47.754933 kubelet[2496]: E0117 00:50:47.754566 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bf56495c7-svn2v_calico-system(e26a3e55-fb3a-4994-957c-83980e4edeb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:47.755056 kubelet[2496]: E0117 00:50:47.754612 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf56495c7-svn2v" podUID="e26a3e55-fb3a-4994-957c-83980e4edeb6" Jan 17 00:50:47.816812 kernel: bpftool[3979]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:50:48.157404 systemd-networkd[1386]: vxlan.calico: Link UP Jan 17 00:50:48.157413 systemd-networkd[1386]: vxlan.calico: Gained carrier Jan 17 00:50:48.703000 kubelet[2496]: E0117 00:50:48.702931 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf56495c7-svn2v" podUID="e26a3e55-fb3a-4994-957c-83980e4edeb6" Jan 17 00:50:48.811139 systemd-networkd[1386]: cali092d79052e2: Gained IPv6LL Jan 17 00:50:49.443321 containerd[1452]: time="2026-01-17T00:50:49.443157269Z" level=info msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" Jan 17 00:50:49.443321 containerd[1452]: time="2026-01-17T00:50:49.443196796Z" level=info msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.511 [INFO][4077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.511 [INFO][4077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" iface="eth0" netns="/var/run/netns/cni-c3daca5f-772c-7090-d1aa-0ad6853afd86" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.512 [INFO][4077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" iface="eth0" netns="/var/run/netns/cni-c3daca5f-772c-7090-d1aa-0ad6853afd86" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.513 [INFO][4077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" iface="eth0" netns="/var/run/netns/cni-c3daca5f-772c-7090-d1aa-0ad6853afd86" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.513 [INFO][4077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.513 [INFO][4077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.553 [INFO][4095] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.553 [INFO][4095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.553 [INFO][4095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.561 [WARNING][4095] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.561 [INFO][4095] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.565 [INFO][4095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:49.572418 containerd[1452]: 2026-01-17 00:50:49.569 [INFO][4077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:50:49.576340 containerd[1452]: time="2026-01-17T00:50:49.575936623Z" level=info msg="TearDown network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" successfully" Jan 17 00:50:49.576340 containerd[1452]: time="2026-01-17T00:50:49.575976337Z" level=info msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" returns successfully" Jan 17 00:50:49.576824 systemd[1]: run-netns-cni\x2dc3daca5f\x2d772c\x2d7090\x2dd1aa\x2d0ad6853afd86.mount: Deactivated successfully. Jan 17 00:50:49.582791 kubelet[2496]: E0117 00:50:49.582622 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:49.584093 containerd[1452]: time="2026-01-17T00:50:49.584005515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvm5c,Uid:c965ee07-9ebc-4401-bd94-6f4cb9cb8928,Namespace:kube-system,Attempt:1,}" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.508 [INFO][4075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.508 [INFO][4075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" iface="eth0" netns="/var/run/netns/cni-9cfe7474-6d86-f05b-95a8-9d55e3d5966b" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.509 [INFO][4075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" iface="eth0" netns="/var/run/netns/cni-9cfe7474-6d86-f05b-95a8-9d55e3d5966b" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.511 [INFO][4075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" iface="eth0" netns="/var/run/netns/cni-9cfe7474-6d86-f05b-95a8-9d55e3d5966b" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.511 [INFO][4075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.511 [INFO][4075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.560 [INFO][4093] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.560 [INFO][4093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.565 [INFO][4093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.575 [WARNING][4093] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.575 [INFO][4093] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.579 [INFO][4093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:49.588389 containerd[1452]: 2026-01-17 00:50:49.583 [INFO][4075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:50:49.588389 containerd[1452]: time="2026-01-17T00:50:49.588318079Z" level=info msg="TearDown network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" successfully" Jan 17 00:50:49.588389 containerd[1452]: time="2026-01-17T00:50:49.588345370Z" level=info msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" returns successfully" Jan 17 00:50:49.592841 systemd[1]: run-netns-cni\x2d9cfe7474\x2d6d86\x2df05b\x2d95a8\x2d9d55e3d5966b.mount: Deactivated successfully. 
Jan 17 00:50:49.594808 containerd[1452]: time="2026-01-17T00:50:49.594645380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-d7ft6,Uid:d9a48e4c-2642-431f-9b1f-b247428bfac1,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:50:49.771107 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jan 17 00:50:49.810572 systemd-networkd[1386]: cali6b21efce200: Link UP Jan 17 00:50:49.812204 systemd-networkd[1386]: cali6b21efce200: Gained carrier Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.682 [INFO][4122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0 calico-apiserver-59cdfd4dfb- calico-apiserver d9a48e4c-2642-431f-9b1f-b247428bfac1 947 0 2026-01-17 00:50:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59cdfd4dfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59cdfd4dfb-d7ft6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b21efce200 [] [] }} ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.682 [INFO][4122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.743 [INFO][4141] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" HandleID="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.744 [INFO][4141] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" HandleID="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59cdfd4dfb-d7ft6", "timestamp":"2026-01-17 00:50:49.74306424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.744 [INFO][4141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.744 [INFO][4141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.744 [INFO][4141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.757 [INFO][4141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.765 [INFO][4141] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.774 [INFO][4141] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.782 [INFO][4141] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.787 [INFO][4141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.787 [INFO][4141] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.790 [INFO][4141] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.796 [INFO][4141] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4141] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" host="localhost" Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:49.829567 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4141] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" HandleID="k8s-pod-network.6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.807 [INFO][4122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9a48e4c-2642-431f-9b1f-b247428bfac1", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59cdfd4dfb-d7ft6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b21efce200", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.807 [INFO][4122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.807 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b21efce200 ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.813 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.814 [INFO][4122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9a48e4c-2642-431f-9b1f-b247428bfac1", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda", Pod:"calico-apiserver-59cdfd4dfb-d7ft6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b21efce200", MAC:"aa:17:3a:a5:43:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:49.830439 containerd[1452]: 2026-01-17 00:50:49.825 [INFO][4122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-d7ft6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:50:49.864580 containerd[1452]: time="2026-01-17T00:50:49.863198307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:49.864580 containerd[1452]: time="2026-01-17T00:50:49.863292523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:49.864580 containerd[1452]: time="2026-01-17T00:50:49.863348477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:49.864580 containerd[1452]: time="2026-01-17T00:50:49.863469062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:49.872973 systemd[1]: Started sshd@7-10.0.0.159:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Jan 17 00:50:49.920835 systemd[1]: Started cri-containerd-6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda.scope - libcontainer container 6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda. 
Jan 17 00:50:49.968869 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:50:49.967501 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:50:49.977276 systemd-logind[1432]: New session 8 of user core. Jan 17 00:50:49.982033 systemd-networkd[1386]: cali720defa8108: Link UP Jan 17 00:50:49.982971 systemd-networkd[1386]: cali720defa8108: Gained carrier Jan 17 00:50:49.983025 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:50:49.994642 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.689 [INFO][4111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dvm5c-eth0 coredns-66bc5c9577- kube-system c965ee07-9ebc-4401-bd94-6f4cb9cb8928 948 0 2026-01-17 00:50:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dvm5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali720defa8108 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.689 [INFO][4111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.752 [INFO][4143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" HandleID="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.752 [INFO][4143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" HandleID="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000523e60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dvm5c", "timestamp":"2026-01-17 00:50:49.752293546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.753 [INFO][4143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.803 [INFO][4143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.858 [INFO][4143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.869 [INFO][4143] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.880 [INFO][4143] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.886 [INFO][4143] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.896 [INFO][4143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.896 [INFO][4143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.905 [INFO][4143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9 Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.939 [INFO][4143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.959 [INFO][4143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.959 [INFO][4143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" host="localhost" Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.959 [INFO][4143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:50.006069 containerd[1452]: 2026-01-17 00:50:49.959 [INFO][4143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" HandleID="k8s-pod-network.2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.973 [INFO][4111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dvm5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c965ee07-9ebc-4401-bd94-6f4cb9cb8928", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dvm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali720defa8108", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.973 [INFO][4111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.973 [INFO][4111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali720defa8108 ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.984 
[INFO][4111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.986 [INFO][4111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dvm5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c965ee07-9ebc-4401-bd94-6f4cb9cb8928", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9", Pod:"coredns-66bc5c9577-dvm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali720defa8108", MAC:"3a:3e:38:5a:34:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:50.006642 containerd[1452]: 2026-01-17 00:50:49.998 [INFO][4111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9" Namespace="kube-system" Pod="coredns-66bc5c9577-dvm5c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:50:50.044090 containerd[1452]: time="2026-01-17T00:50:50.043472108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:50.044090 containerd[1452]: time="2026-01-17T00:50:50.043540767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:50.044090 containerd[1452]: time="2026-01-17T00:50:50.043559020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:50.044090 containerd[1452]: time="2026-01-17T00:50:50.043783359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:50.044090 containerd[1452]: time="2026-01-17T00:50:50.044055306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-d7ft6,Uid:d9a48e4c-2642-431f-9b1f-b247428bfac1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda\"" Jan 17 00:50:50.054117 containerd[1452]: time="2026-01-17T00:50:50.054054179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:50:50.088932 systemd[1]: Started cri-containerd-2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9.scope - libcontainer container 2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9. Jan 17 00:50:50.107464 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:50.122424 containerd[1452]: time="2026-01-17T00:50:50.121085175Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:50.122424 containerd[1452]: time="2026-01-17T00:50:50.122394007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:50:50.122570 containerd[1452]: time="2026-01-17T00:50:50.122531053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:50:50.123025 kubelet[2496]: E0117 00:50:50.122992 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:50:50.124838 kubelet[2496]: E0117 00:50:50.123820 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:50:50.125680 kubelet[2496]: E0117 00:50:50.125472 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59cdfd4dfb-d7ft6_calico-apiserver(d9a48e4c-2642-431f-9b1f-b247428bfac1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:50.125680 kubelet[2496]: E0117 00:50:50.125512 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:50:50.152347 containerd[1452]: time="2026-01-17T00:50:50.152217756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvm5c,Uid:c965ee07-9ebc-4401-bd94-6f4cb9cb8928,Namespace:kube-system,Attempt:1,} returns sandbox id \"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9\"" Jan 17 00:50:50.153516 kubelet[2496]: E0117 00:50:50.153292 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:50.170603 containerd[1452]: time="2026-01-17T00:50:50.170457045Z" level=info msg="CreateContainer within sandbox \"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:50:50.198609 containerd[1452]: time="2026-01-17T00:50:50.198259719Z" level=info msg="CreateContainer within sandbox \"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91d6e212f22f0d183daf1daaac1c42c1287ef0fd4dbad076a24c5031dc157224\"" Jan 17 00:50:50.199551 containerd[1452]: time="2026-01-17T00:50:50.199465604Z" level=info msg="StartContainer for \"91d6e212f22f0d183daf1daaac1c42c1287ef0fd4dbad076a24c5031dc157224\"" Jan 17 00:50:50.213346 sshd[4182]: pam_unix(sshd:session): session closed for user core Jan 17 00:50:50.217863 systemd[1]: sshd@7-10.0.0.159:22-10.0.0.1:55502.service: Deactivated successfully. Jan 17 00:50:50.220063 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:50:50.221299 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:50:50.223359 systemd-logind[1432]: Removed session 8. Jan 17 00:50:50.244288 systemd[1]: Started cri-containerd-91d6e212f22f0d183daf1daaac1c42c1287ef0fd4dbad076a24c5031dc157224.scope - libcontainer container 91d6e212f22f0d183daf1daaac1c42c1287ef0fd4dbad076a24c5031dc157224. Jan 17 00:50:50.286214 containerd[1452]: time="2026-01-17T00:50:50.286181968Z" level=info msg="StartContainer for \"91d6e212f22f0d183daf1daaac1c42c1287ef0fd4dbad076a24c5031dc157224\" returns successfully" Jan 17 00:50:50.444502 containerd[1452]: time="2026-01-17T00:50:50.444178135Z" level=info msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" Jan 17 00:50:50.444502 containerd[1452]: time="2026-01-17T00:50:50.444203434Z" level=info msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" Jan 17 00:50:50.446257 containerd[1452]: time="2026-01-17T00:50:50.444183206Z" level=info msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.529 [INFO][4349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.531 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" iface="eth0" netns="/var/run/netns/cni-aa8488fc-d9a9-8ffd-d8de-609c1ffa5e8d" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.532 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" iface="eth0" netns="/var/run/netns/cni-aa8488fc-d9a9-8ffd-d8de-609c1ffa5e8d" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.532 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" iface="eth0" netns="/var/run/netns/cni-aa8488fc-d9a9-8ffd-d8de-609c1ffa5e8d" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.532 [INFO][4349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.532 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.572 [INFO][4379] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.575 [INFO][4379] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.575 [INFO][4379] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.586 [WARNING][4379] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.587 [INFO][4379] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.589 [INFO][4379] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:50.606767 containerd[1452]: 2026-01-17 00:50:50.596 [INFO][4349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:50:50.606767 containerd[1452]: time="2026-01-17T00:50:50.601854091Z" level=info msg="TearDown network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" successfully" Jan 17 00:50:50.606767 containerd[1452]: time="2026-01-17T00:50:50.601879809Z" level=info msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" returns successfully" Jan 17 00:50:50.606237 systemd[1]: run-netns-cni\x2daa8488fc\x2dd9a9\x2d8ffd\x2dd8de\x2d609c1ffa5e8d.mount: Deactivated successfully. 
Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.520 [INFO][4350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.521 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" iface="eth0" netns="/var/run/netns/cni-5fb0e0ea-64c3-96ae-4710-1051c0803b10" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.522 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" iface="eth0" netns="/var/run/netns/cni-5fb0e0ea-64c3-96ae-4710-1051c0803b10" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.523 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" iface="eth0" netns="/var/run/netns/cni-5fb0e0ea-64c3-96ae-4710-1051c0803b10" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.523 [INFO][4350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.523 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.583 [INFO][4373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.585 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.589 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.598 [WARNING][4373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.598 [INFO][4373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.600 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:50.610022 containerd[1452]: 2026-01-17 00:50:50.606 [INFO][4350] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:50:50.610481 containerd[1452]: time="2026-01-17T00:50:50.610333659Z" level=info msg="TearDown network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" successfully" Jan 17 00:50:50.610511 containerd[1452]: time="2026-01-17T00:50:50.610365048Z" level=info msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" returns successfully" Jan 17 00:50:50.613175 systemd[1]: run-netns-cni\x2d5fb0e0ea\x2d64c3\x2d96ae\x2d4710\x2d1051c0803b10.mount: Deactivated successfully. Jan 17 00:50:50.617560 containerd[1452]: time="2026-01-17T00:50:50.617243659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-s7ntg,Uid:fff518d5-06d5-4f2e-9a9a-f374cb758607,Namespace:calico-system,Attempt:1,}" Jan 17 00:50:50.621587 containerd[1452]: time="2026-01-17T00:50:50.621483569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-nd9rl,Uid:e797004f-4966-4738-8311-6962046bba3a,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.552 [INFO][4351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.552 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" iface="eth0" netns="/var/run/netns/cni-0bd32e7c-2ef3-81ed-c56f-ac8da90f4951" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.552 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" iface="eth0" netns="/var/run/netns/cni-0bd32e7c-2ef3-81ed-c56f-ac8da90f4951" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.553 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" iface="eth0" netns="/var/run/netns/cni-0bd32e7c-2ef3-81ed-c56f-ac8da90f4951" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.553 [INFO][4351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.553 [INFO][4351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.605 [INFO][4386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.607 [INFO][4386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.607 [INFO][4386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.615 [WARNING][4386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.615 [INFO][4386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.617 [INFO][4386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:50.627414 containerd[1452]: 2026-01-17 00:50:50.623 [INFO][4351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:50:50.627978 containerd[1452]: time="2026-01-17T00:50:50.627744669Z" level=info msg="TearDown network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" successfully" Jan 17 00:50:50.627978 containerd[1452]: time="2026-01-17T00:50:50.627762262Z" level=info msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" returns successfully" Jan 17 00:50:50.630283 systemd[1]: run-netns-cni\x2d0bd32e7c\x2d2ef3\x2d81ed\x2dc56f\x2dac8da90f4951.mount: Deactivated successfully. Jan 17 00:50:50.632944 kubelet[2496]: E0117 00:50:50.632895 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:50.634090 containerd[1452]: time="2026-01-17T00:50:50.633937142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gm8m8,Uid:00ae415f-67f7-4e67-a9d9-4d68f93ea018,Namespace:kube-system,Attempt:1,}" Jan 17 00:50:50.710368 kubelet[2496]: E0117 00:50:50.710084 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:50.721211 kubelet[2496]: E0117 00:50:50.721088 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:50:50.758025 kubelet[2496]: I0117 00:50:50.757763 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dvm5c" podStartSLOduration=38.757632445 podStartE2EDuration="38.757632445s" podCreationTimestamp="2026-01-17 00:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:50.736522666 +0000 UTC m=+45.438335543" watchObservedRunningTime="2026-01-17 00:50:50.757632445 +0000 UTC m=+45.459445321" Jan 17 00:50:50.858993 systemd-networkd[1386]: cali6b21efce200: Gained IPv6LL Jan 17 00:50:50.890403 systemd-networkd[1386]: caliedcdd0d4063: Link 
UP Jan 17 00:50:50.891772 systemd-networkd[1386]: caliedcdd0d4063: Gained carrier Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.744 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--gm8m8-eth0 coredns-66bc5c9577- kube-system 00ae415f-67f7-4e67-a9d9-4d68f93ea018 1002 0 2026-01-17 00:50:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-gm8m8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliedcdd0d4063 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.745 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.812 [INFO][4446] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" HandleID="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.815 [INFO][4446] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" HandleID="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004aeed0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-gm8m8", "timestamp":"2026-01-17 00:50:50.812928601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.815 [INFO][4446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.819 [INFO][4446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.819 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.834 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.841 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.848 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.857 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.862 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.862 [INFO][4446] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.864 [INFO][4446] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544 Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.869 [INFO][4446] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.877 [INFO][4446] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.877 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" host="localhost" Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.878 [INFO][4446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:50.912307 containerd[1452]: 2026-01-17 00:50:50.879 [INFO][4446] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" HandleID="k8s-pod-network.c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.885 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gm8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"00ae415f-67f7-4e67-a9d9-4d68f93ea018", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-gm8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedcdd0d4063", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.885 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.886 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedcdd0d4063 ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.892 
[INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.893 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gm8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"00ae415f-67f7-4e67-a9d9-4d68f93ea018", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544", Pod:"coredns-66bc5c9577-gm8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedcdd0d4063", MAC:"42:5e:9e:a0:fd:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:50.914933 containerd[1452]: 2026-01-17 00:50:50.907 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544" Namespace="kube-system" Pod="coredns-66bc5c9577-gm8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:50:50.957315 containerd[1452]: time="2026-01-17T00:50:50.956992613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:50.958328 containerd[1452]: time="2026-01-17T00:50:50.958190248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:50.958328 containerd[1452]: time="2026-01-17T00:50:50.958228199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:50.960000 containerd[1452]: time="2026-01-17T00:50:50.958336009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:50.989036 systemd[1]: Started cri-containerd-c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544.scope - libcontainer container c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544. Jan 17 00:50:51.003122 systemd-networkd[1386]: calie50565285e3: Link UP Jan 17 00:50:51.004951 systemd-networkd[1386]: calie50565285e3: Gained carrier Jan 17 00:50:51.011532 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.717 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--s7ntg-eth0 goldmane-7c778bb748- calico-system fff518d5-06d5-4f2e-9a9a-f374cb758607 1001 0 2026-01-17 00:50:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-s7ntg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie50565285e3 [] [] }} ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.718 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.822 [INFO][4438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" HandleID="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.824 [INFO][4438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" HandleID="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002def00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-s7ntg", "timestamp":"2026-01-17 00:50:50.822401252 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.824 [INFO][4438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.878 [INFO][4438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.878 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.935 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.943 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.955 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.962 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.966 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.966 [INFO][4438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.968 [INFO][4438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368 Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.973 [INFO][4438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.981 [INFO][4438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.982 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" host="localhost" Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.982 [INFO][4438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:51.026110 containerd[1452]: 2026-01-17 00:50:50.982 [INFO][4438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" HandleID="k8s-pod-network.deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:50.988 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--s7ntg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fff518d5-06d5-4f2e-9a9a-f374cb758607", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-s7ntg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie50565285e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:50.989 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:50.989 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie50565285e3 ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:51.006 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:51.006 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--s7ntg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fff518d5-06d5-4f2e-9a9a-f374cb758607", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368", Pod:"goldmane-7c778bb748-s7ntg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie50565285e3", MAC:"f2:4e:5b:41:75:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.026989 containerd[1452]: 2026-01-17 00:50:51.020 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368" Namespace="calico-system" Pod="goldmane-7c778bb748-s7ntg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:50:51.054913 containerd[1452]: time="2026-01-17T00:50:51.054462576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gm8m8,Uid:00ae415f-67f7-4e67-a9d9-4d68f93ea018,Namespace:kube-system,Attempt:1,} returns sandbox id \"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544\"" Jan 17 00:50:51.058058 kubelet[2496]: E0117 00:50:51.057939 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:51.065385 containerd[1452]: time="2026-01-17T00:50:51.065287330Z" level=info msg="CreateContainer within sandbox \"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:50:51.086496 containerd[1452]: time="2026-01-17T00:50:51.086229840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:51.086496 containerd[1452]: time="2026-01-17T00:50:51.086405878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:51.087416 containerd[1452]: time="2026-01-17T00:50:51.086432287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.087416 containerd[1452]: time="2026-01-17T00:50:51.086615238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.092418 containerd[1452]: time="2026-01-17T00:50:51.092142961Z" level=info msg="CreateContainer within sandbox \"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f27e18f192c6d5962c859a2ae9b7f02b948c38fa0bbf998fbb3de6095bac6de\"" Jan 17 00:50:51.094796 containerd[1452]: time="2026-01-17T00:50:51.094091833Z" level=info msg="StartContainer for \"2f27e18f192c6d5962c859a2ae9b7f02b948c38fa0bbf998fbb3de6095bac6de\"" Jan 17 00:50:51.125011 systemd-networkd[1386]: calie6685fdb1e0: Link UP Jan 17 00:50:51.125554 systemd-networkd[1386]: calie6685fdb1e0: Gained carrier Jan 17 00:50:51.126947 systemd[1]: Started cri-containerd-deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368.scope - libcontainer container deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368. Jan 17 00:50:51.154572 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.734 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0 calico-apiserver-59cdfd4dfb- calico-apiserver e797004f-4966-4738-8311-6962046bba3a 1000 0 2026-01-17 00:50:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59cdfd4dfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59cdfd4dfb-nd9rl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie6685fdb1e0 [] [] }} ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.737 [INFO][4407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.826 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" HandleID="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.826 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" HandleID="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59cdfd4dfb-nd9rl", "timestamp":"2026-01-17 00:50:50.826635752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.826 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.982 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:50.983 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.036 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.052 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.068 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.076 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.081 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.081 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.084 [INFO][4452] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603 Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.093 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.105 [INFO][4452] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.105 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" host="localhost" Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.106 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:51.173200 containerd[1452]: 2026-01-17 00:50:51.106 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" HandleID="k8s-pod-network.8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.115 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e797004f-4966-4738-8311-6962046bba3a", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59cdfd4dfb-nd9rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6685fdb1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.115 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.115 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6685fdb1e0 ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.129 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.135 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e797004f-4966-4738-8311-6962046bba3a", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603", Pod:"calico-apiserver-59cdfd4dfb-nd9rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6685fdb1e0", MAC:"c2:ff:f9:be:6a:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.174040 containerd[1452]: 2026-01-17 00:50:51.155 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603" Namespace="calico-apiserver" Pod="calico-apiserver-59cdfd4dfb-nd9rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:50:51.186052 systemd[1]: Started cri-containerd-2f27e18f192c6d5962c859a2ae9b7f02b948c38fa0bbf998fbb3de6095bac6de.scope - libcontainer container 2f27e18f192c6d5962c859a2ae9b7f02b948c38fa0bbf998fbb3de6095bac6de. Jan 17 00:50:51.210412 containerd[1452]: time="2026-01-17T00:50:51.210021567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:51.210412 containerd[1452]: time="2026-01-17T00:50:51.210269741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:51.213311 containerd[1452]: time="2026-01-17T00:50:51.211971816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.213311 containerd[1452]: time="2026-01-17T00:50:51.213031943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.227014 containerd[1452]: time="2026-01-17T00:50:51.226303162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-s7ntg,Uid:fff518d5-06d5-4f2e-9a9a-f374cb758607,Namespace:calico-system,Attempt:1,} returns sandbox id \"deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368\"" Jan 17 00:50:51.231430 containerd[1452]: time="2026-01-17T00:50:51.231196450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:50:51.254924 systemd[1]: Started cri-containerd-8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603.scope - libcontainer container 8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603. Jan 17 00:50:51.262535 containerd[1452]: time="2026-01-17T00:50:51.261881663Z" level=info msg="StartContainer for \"2f27e18f192c6d5962c859a2ae9b7f02b948c38fa0bbf998fbb3de6095bac6de\" returns successfully" Jan 17 00:50:51.288877 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:51.305068 containerd[1452]: time="2026-01-17T00:50:51.304918190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:51.308120 containerd[1452]: time="2026-01-17T00:50:51.308052026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:50:51.308264 containerd[1452]: time="2026-01-17T00:50:51.308155309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:50:51.308453 kubelet[2496]: E0117 00:50:51.308389 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:50:51.308925 kubelet[2496]: E0117 00:50:51.308459 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:50:51.308925 kubelet[2496]: E0117 00:50:51.308536 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-s7ntg_calico-system(fff518d5-06d5-4f2e-9a9a-f374cb758607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:51.308925 kubelet[2496]: E0117 00:50:51.308572 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:50:51.332553 containerd[1452]: time="2026-01-17T00:50:51.332472002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cdfd4dfb-nd9rl,Uid:e797004f-4966-4738-8311-6962046bba3a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603\"" Jan 17 00:50:51.337014 containerd[1452]: time="2026-01-17T00:50:51.336119207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:50:51.399973 containerd[1452]: time="2026-01-17T00:50:51.399870916Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:51.401052 containerd[1452]: time="2026-01-17T00:50:51.400980556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:50:51.401125 containerd[1452]: time="2026-01-17T00:50:51.401068410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:50:51.401384 kubelet[2496]: E0117 00:50:51.401218 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:50:51.401384 kubelet[2496]: E0117 00:50:51.401261 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:50:51.401384 kubelet[2496]: E0117 00:50:51.401336 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59cdfd4dfb-nd9rl_calico-apiserver(e797004f-4966-4738-8311-6962046bba3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:51.401384 kubelet[2496]: E0117 00:50:51.401371 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:50:51.443634 containerd[1452]: time="2026-01-17T00:50:51.443545626Z" level=info msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" iface="eth0" netns="/var/run/netns/cni-b0771959-ed7c-6304-476d-22e8d3a0259f" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" iface="eth0" netns="/var/run/netns/cni-b0771959-ed7c-6304-476d-22e8d3a0259f" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" iface="eth0" netns="/var/run/netns/cni-b0771959-ed7c-6304-476d-22e8d3a0259f" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.511 [INFO][4675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.542 [INFO][4684] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.543 [INFO][4684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.543 [INFO][4684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.552 [WARNING][4684] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.552 [INFO][4684] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.554 [INFO][4684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:51.561489 containerd[1452]: 2026-01-17 00:50:51.557 [INFO][4675] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:50:51.562455 containerd[1452]: time="2026-01-17T00:50:51.561514671Z" level=info msg="TearDown network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" successfully" Jan 17 00:50:51.562455 containerd[1452]: time="2026-01-17T00:50:51.561544617Z" level=info msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" returns successfully" Jan 17 00:50:51.565806 containerd[1452]: time="2026-01-17T00:50:51.565773151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6b7bfc9b-vp5zs,Uid:4861c4dc-4420-41d7-806f-ea096c9baa96,Namespace:calico-system,Attempt:1,}" Jan 17 00:50:51.586550 systemd[1]: run-netns-cni\x2db0771959\x2ded7c\x2d6304\x2d476d\x2d22e8d3a0259f.mount: Deactivated successfully. Jan 17 00:50:51.729285 systemd-networkd[1386]: cali418e01150d9: Link UP Jan 17 00:50:51.735118 kubelet[2496]: E0117 00:50:51.734118 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:51.737083 systemd-networkd[1386]: cali418e01150d9: Gained carrier Jan 17 00:50:51.757640 kubelet[2496]: E0117 00:50:51.757525 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.625 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0 calico-kube-controllers-7b6b7bfc9b- calico-system 4861c4dc-4420-41d7-806f-ea096c9baa96 1047 0 2026-01-17 00:50:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b6b7bfc9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b6b7bfc9b-vp5zs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali418e01150d9 [] [] }} ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.625 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.669 [INFO][4708] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" 
HandleID="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.669 [INFO][4708] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" HandleID="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005069b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b6b7bfc9b-vp5zs", "timestamp":"2026-01-17 00:50:51.669372425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.669 [INFO][4708] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.669 [INFO][4708] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.669 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.679 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.689 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.696 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.699 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.702 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.703 [INFO][4708] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.705 [INFO][4708] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.711 [INFO][4708] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.718 [INFO][4708] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.718 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" host="localhost" Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.718 [INFO][4708] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:51.762766 containerd[1452]: 2026-01-17 00:50:51.718 [INFO][4708] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" HandleID="k8s-pod-network.af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.723 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0", GenerateName:"calico-kube-controllers-7b6b7bfc9b-", Namespace:"calico-system", SelfLink:"", UID:"4861c4dc-4420-41d7-806f-ea096c9baa96", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6b7bfc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b6b7bfc9b-vp5zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali418e01150d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.723 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.723 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali418e01150d9 ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.735 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" 
Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.735 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0", GenerateName:"calico-kube-controllers-7b6b7bfc9b-", Namespace:"calico-system", SelfLink:"", UID:"4861c4dc-4420-41d7-806f-ea096c9baa96", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6b7bfc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf", Pod:"calico-kube-controllers-7b6b7bfc9b-vp5zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali418e01150d9", MAC:"f6:0d:51:77:36:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:51.763531 containerd[1452]: 2026-01-17 00:50:51.752 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf" Namespace="calico-system" Pod="calico-kube-controllers-7b6b7bfc9b-vp5zs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:50:51.766590 kubelet[2496]: E0117 00:50:51.765048 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:51.768408 kubelet[2496]: E0117 00:50:51.767631 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:50:51.768799 kubelet[2496]: E0117 00:50:51.767540 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:50:51.807056 kubelet[2496]: I0117 00:50:51.806836 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gm8m8" podStartSLOduration=39.806819167 podStartE2EDuration="39.806819167s" podCreationTimestamp="2026-01-17 00:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:50:51.760614987 +0000 UTC m=+46.462427892" watchObservedRunningTime="2026-01-17 00:50:51.806819167 +0000 UTC m=+46.508632044" Jan 17 00:50:51.807418 containerd[1452]: time="2026-01-17T00:50:51.806349036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:51.808943 containerd[1452]: time="2026-01-17T00:50:51.808364696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:51.808943 containerd[1452]: time="2026-01-17T00:50:51.808521248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.809358 containerd[1452]: time="2026-01-17T00:50:51.809220353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:51.862345 systemd[1]: Started cri-containerd-af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf.scope - libcontainer container af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf. 
Jan 17 00:50:51.882943 systemd-networkd[1386]: cali720defa8108: Gained IPv6LL Jan 17 00:50:51.935173 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:51.974204 containerd[1452]: time="2026-01-17T00:50:51.974091902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6b7bfc9b-vp5zs,Uid:4861c4dc-4420-41d7-806f-ea096c9baa96,Namespace:calico-system,Attempt:1,} returns sandbox id \"af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf\"" Jan 17 00:50:51.976756 containerd[1452]: time="2026-01-17T00:50:51.976521410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:50:52.035403 containerd[1452]: time="2026-01-17T00:50:52.035218447Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:52.036442 containerd[1452]: time="2026-01-17T00:50:52.036330050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:50:52.036442 containerd[1452]: time="2026-01-17T00:50:52.036362813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:50:52.036595 kubelet[2496]: E0117 00:50:52.036546 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:50:52.036595 kubelet[2496]: E0117 00:50:52.036581 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:50:52.036773 kubelet[2496]: E0117 00:50:52.036641 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b6b7bfc9b-vp5zs_calico-system(4861c4dc-4420-41d7-806f-ea096c9baa96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:52.036863 kubelet[2496]: E0117 00:50:52.036784 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:50:52.395108 
systemd-networkd[1386]: calie50565285e3: Gained IPv6LL Jan 17 00:50:52.587082 systemd-networkd[1386]: caliedcdd0d4063: Gained IPv6LL Jan 17 00:50:52.652059 systemd-networkd[1386]: calie6685fdb1e0: Gained IPv6LL Jan 17 00:50:52.768870 kubelet[2496]: E0117 00:50:52.768776 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:52.769392 kubelet[2496]: E0117 00:50:52.768879 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:52.770899 kubelet[2496]: E0117 00:50:52.769603 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:50:52.770899 kubelet[2496]: E0117 00:50:52.769087 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:50:52.770899 kubelet[2496]: E0117 00:50:52.770847 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:50:53.483044 systemd-networkd[1386]: cali418e01150d9: Gained IPv6LL Jan 17 00:50:53.771532 kubelet[2496]: E0117 00:50:53.771388 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:53.772158 kubelet[2496]: E0117 00:50:53.771965 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:50:54.444221 containerd[1452]: time="2026-01-17T00:50:54.443828863Z" level=info msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.501 [INFO][4787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.502 [INFO][4787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" iface="eth0" netns="/var/run/netns/cni-184bd0a0-9de3-0cbc-ec75-67f1eb03f157" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.504 [INFO][4787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" iface="eth0" netns="/var/run/netns/cni-184bd0a0-9de3-0cbc-ec75-67f1eb03f157" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.506 [INFO][4787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" iface="eth0" netns="/var/run/netns/cni-184bd0a0-9de3-0cbc-ec75-67f1eb03f157" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.506 [INFO][4787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.506 [INFO][4787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.538 [INFO][4796] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.538 [INFO][4796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.538 [INFO][4796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.546 [WARNING][4796] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.547 [INFO][4796] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.549 [INFO][4796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:50:54.556421 containerd[1452]: 2026-01-17 00:50:54.552 [INFO][4787] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:50:54.557169 containerd[1452]: time="2026-01-17T00:50:54.556899824Z" level=info msg="TearDown network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" successfully" Jan 17 00:50:54.557169 containerd[1452]: time="2026-01-17T00:50:54.556935571Z" level=info msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" returns successfully" Jan 17 00:50:54.561197 systemd[1]: run-netns-cni\x2d184bd0a0\x2d9de3\x2d0cbc\x2dec75\x2d67f1eb03f157.mount: Deactivated successfully. Jan 17 00:50:54.568495 containerd[1452]: time="2026-01-17T00:50:54.568407941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pldn,Uid:4022344e-59ba-4aec-9ee8-9c1779407c17,Namespace:calico-system,Attempt:1,}" Jan 17 00:50:54.725895 systemd-networkd[1386]: calia13b9e76c09: Link UP Jan 17 00:50:54.726262 systemd-networkd[1386]: calia13b9e76c09: Gained carrier Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.636 [INFO][4805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8pldn-eth0 csi-node-driver- calico-system 4022344e-59ba-4aec-9ee8-9c1779407c17 1122 0 2026-01-17 00:50:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8pldn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia13b9e76c09 [] [] }} ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.636 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.672 [INFO][4818] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" HandleID="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.672 [INFO][4818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" HandleID="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8pldn", "timestamp":"2026-01-17 00:50:54.672121018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.672 
[INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.672 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.672 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.682 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.689 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.695 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.698 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.701 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.701 [INFO][4818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.703 [INFO][4818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948 Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.710 [INFO][4818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.717 [INFO][4818] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.717 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" host="localhost" Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.717 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:50:54.744410 containerd[1452]: 2026-01-17 00:50:54.717 [INFO][4818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" HandleID="k8s-pod-network.942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.721 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pldn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4022344e-59ba-4aec-9ee8-9c1779407c17", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8pldn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia13b9e76c09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.722 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.722 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia13b9e76c09 ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.725 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.727 [INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pldn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4022344e-59ba-4aec-9ee8-9c1779407c17", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948", Pod:"csi-node-driver-8pldn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia13b9e76c09", MAC:"12:6a:8b:d1:ab:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:50:54.745467 containerd[1452]: 2026-01-17 00:50:54.739 [INFO][4805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948" Namespace="calico-system" Pod="csi-node-driver-8pldn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:50:54.775437 containerd[1452]: time="2026-01-17T00:50:54.775289145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:50:54.775437 containerd[1452]: time="2026-01-17T00:50:54.775378712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:50:54.775437 containerd[1452]: time="2026-01-17T00:50:54.775393490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:54.780582 containerd[1452]: time="2026-01-17T00:50:54.779976188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:50:54.822058 systemd[1]: Started cri-containerd-942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948.scope - libcontainer container 942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948. 
Jan 17 00:50:54.844159 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:50:54.870783 containerd[1452]: time="2026-01-17T00:50:54.870484124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pldn,Uid:4022344e-59ba-4aec-9ee8-9c1779407c17,Namespace:calico-system,Attempt:1,} returns sandbox id \"942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948\"" Jan 17 00:50:54.874094 containerd[1452]: time="2026-01-17T00:50:54.874008427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:50:54.934033 containerd[1452]: time="2026-01-17T00:50:54.933946703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:54.935481 containerd[1452]: time="2026-01-17T00:50:54.935343658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:50:54.935481 containerd[1452]: time="2026-01-17T00:50:54.935451380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:50:54.935856 kubelet[2496]: E0117 00:50:54.935638 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:50:54.935856 kubelet[2496]: E0117 00:50:54.935795 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:50:54.936339 kubelet[2496]: E0117 00:50:54.935865 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:54.938288 containerd[1452]: time="2026-01-17T00:50:54.938122473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:50:55.000144 containerd[1452]: time="2026-01-17T00:50:54.999833268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:50:55.001871 containerd[1452]: time="2026-01-17T00:50:55.001371488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:50:55.001871 containerd[1452]: time="2026-01-17T00:50:55.001469222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=93" Jan 17 00:50:55.002075 kubelet[2496]: E0117 00:50:55.001980 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:50:55.002435 kubelet[2496]: E0117 00:50:55.002163 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:50:55.002435 kubelet[2496]: E0117 00:50:55.002342 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:50:55.002435 kubelet[2496]: E0117 00:50:55.002396 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:55.232841 systemd[1]: Started sshd@8-10.0.0.159:22-10.0.0.1:57254.service - OpenSSH per-connection server daemon (10.0.0.1:57254). Jan 17 00:50:55.301897 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 57254 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:50:55.304975 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:50:55.312105 systemd-logind[1432]: New session 9 of user core. Jan 17 00:50:55.329074 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:50:55.490123 sshd[4880]: pam_unix(sshd:session): session closed for user core Jan 17 00:50:55.496122 systemd[1]: sshd@8-10.0.0.159:22-10.0.0.1:57254.service: Deactivated successfully. Jan 17 00:50:55.499213 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:50:55.501855 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:50:55.504056 systemd-logind[1432]: Removed session 9. 
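The two pull failures above (ghcr.io/flatcar/calico/csi:v3.30.4 and node-driver-registrar:v3.30.4) come from containerd resolving the references and getting HTTP 404 from the registry ("trying next host - response was http.StatusNotFound"), which the CRI surfaces to the kubelet as a NotFound RPC error. A resolve is essentially a manifest request against the OCI distribution API, and that request can be issued directly to inspect the status. The sketch below deliberately skips the registry's token handshake, so against ghcr.io a 401 only means "authenticate first", while a 404 after authentication corresponds to the "not found" in the log; the /v2/<name>/manifests/<reference> layout is the standard distribution-spec convention, not anything specific to this cluster.

package main

import (
	"fmt"
	"net/http"
)

// checkManifest asks a registry whether a manifest exists for an image
// reference using the OCI distribution API. No auth token is requested, so
// some registries (including ghcr.io) answer 401 even for images that exist;
// a 404 once authenticated is the "not found" containerd reported.
func checkManifest(registry, name, reference string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, name, reference)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	// Advertise common manifest media types so the registry does not reject
	// the request for lack of an Accept header.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	status, err := checkManifest("ghcr.io", "flatcar/calico/csi", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("manifest status:", status) // 401 = auth needed, 404 = tag not found, 200 = exists
}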
Jan 17 00:50:55.788643 kubelet[2496]: E0117 00:50:55.788521 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:55.979113 systemd-networkd[1386]: calia13b9e76c09: Gained IPv6LL Jan 17 00:50:56.791375 kubelet[2496]: E0117 00:50:56.791237 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:50:58.431028 kubelet[2496]: I0117 00:50:58.430935 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:50:58.432211 kubelet[2496]: E0117 00:50:58.431517 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:50:58.796433 kubelet[2496]: E0117 00:50:58.796229 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:51:00.507869 systemd[1]: Started sshd@9-10.0.0.159:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). Jan 17 00:51:00.565342 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:00.567856 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:00.576502 systemd-logind[1432]: New session 10 of user core. Jan 17 00:51:00.584036 systemd[1]: Started session-10.scope - Session 10 of User core. 
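After the initial ErrImagePull, the kubelet's pod workers stop retrying immediately and report ImagePullBackOff for both containers of csi-node-driver-8pldn; the entries at 00:50:55 and 00:50:56 above are that back-off in action, and retries continue on an exponential schedule. Commonly cited kubelet defaults are a 10-second initial delay doubling up to a 5-minute cap; those numbers are an assumption here, not something this log states, and the sketch below only shows the shape of such a schedule.

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the successive delays of an exponential back-off
// with the given initial delay, growth factor and cap: the shape behind the
// repeated "Back-off pulling image ..." events. The 10s / 2x / 5m values in
// main are assumed kubelet-like defaults, not read from this cluster.
func backoffSchedule(initial time.Duration, factor float64, max time.Duration, steps int) []time.Duration {
	out := make([]time.Duration, 0, steps)
	d := initial
	for i := 0; i < steps; i++ {
		out = append(out, d)
		d = time.Duration(float64(d) * factor)
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 2, 5*time.Minute, 7) {
		fmt.Printf("retry %d after %s\n", i+1, d)
	}
	// retry 1 after 10s, retry 2 after 20s, ... capped at 5m0s
}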
Jan 17 00:51:00.742038 sshd[4955]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:00.751842 systemd[1]: sshd@9-10.0.0.159:22-10.0.0.1:57270.service: Deactivated successfully. Jan 17 00:51:00.754830 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:51:00.757795 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:51:00.766063 systemd[1]: Started sshd@10-10.0.0.159:22-10.0.0.1:57276.service - OpenSSH per-connection server daemon (10.0.0.1:57276). Jan 17 00:51:00.767222 systemd-logind[1432]: Removed session 10. Jan 17 00:51:00.805398 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 57276 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:00.807591 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:00.814172 systemd-logind[1432]: New session 11 of user core. Jan 17 00:51:00.828996 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:51:01.016548 sshd[4970]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:01.034111 systemd[1]: sshd@10-10.0.0.159:22-10.0.0.1:57276.service: Deactivated successfully. Jan 17 00:51:01.040937 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:51:01.046006 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:51:01.061345 systemd[1]: Started sshd@11-10.0.0.159:22-10.0.0.1:57282.service - OpenSSH per-connection server daemon (10.0.0.1:57282). Jan 17 00:51:01.064457 systemd-logind[1432]: Removed session 11. Jan 17 00:51:01.101990 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 57282 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:01.104263 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:01.110058 systemd-logind[1432]: New session 12 of user core. Jan 17 00:51:01.117915 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:51:01.274145 sshd[4989]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:01.278924 systemd[1]: sshd@11-10.0.0.159:22-10.0.0.1:57282.service: Deactivated successfully. Jan 17 00:51:01.281516 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:51:01.282916 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:51:01.284458 systemd-logind[1432]: Removed session 12. 
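The "Nameserver limits exceeded" warnings at 00:50:58 above come from the kubelet noticing that the node's resolv.conf lists more nameservers than the resolver supports, so it trims the list before handing it to pods; glibc's resolver honours at most three nameserver entries (MAXNS), which is why the applied line in the log keeps exactly 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch of that trimming, with the three-entry limit taken from the glibc constant rather than from kubelet source:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: resolv.conf nameserver entries beyond this are ignored

// applyNameserverLimit keeps the first maxNameservers entries and reports
// whether anything was dropped, roughly what produces the kubelet's
// "Nameserver limits were exceeded" warning.
func applyNameserverLimit(servers []string) (kept []string, dropped bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// The fourth entry is hypothetical; the log only shows the trimmed result.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	kept, dropped := applyNameserverLimit(configured)
	fmt.Printf("applied nameserver line: %s (trimmed: %v)\n", strings.Join(kept, " "), dropped)
}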
Jan 17 00:51:02.445335 containerd[1452]: time="2026-01-17T00:51:02.445128814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:51:02.522206 containerd[1452]: time="2026-01-17T00:51:02.522031109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:02.523977 containerd[1452]: time="2026-01-17T00:51:02.523803550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:51:02.523977 containerd[1452]: time="2026-01-17T00:51:02.523873569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:51:02.524391 kubelet[2496]: E0117 00:51:02.524068 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:51:02.524391 kubelet[2496]: E0117 00:51:02.524251 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:51:02.524391 kubelet[2496]: E0117 00:51:02.524328 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bf56495c7-svn2v_calico-system(e26a3e55-fb3a-4994-957c-83980e4edeb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:02.526204 containerd[1452]: time="2026-01-17T00:51:02.525921045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:51:02.583332 containerd[1452]: time="2026-01-17T00:51:02.583244758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:02.584539 containerd[1452]: time="2026-01-17T00:51:02.584443916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:51:02.584600 containerd[1452]: time="2026-01-17T00:51:02.584528344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:51:02.585138 kubelet[2496]: E0117 00:51:02.585002 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:51:02.585138 kubelet[2496]: E0117 00:51:02.585079 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:51:02.585256 kubelet[2496]: E0117 00:51:02.585152 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bf56495c7-svn2v_calico-system(e26a3e55-fb3a-4994-957c-83980e4edeb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:02.585256 kubelet[2496]: E0117 00:51:02.585199 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf56495c7-svn2v" podUID="e26a3e55-fb3a-4994-957c-83980e4edeb6" Jan 17 00:51:03.444479 containerd[1452]: time="2026-01-17T00:51:03.444415823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:51:03.511241 containerd[1452]: time="2026-01-17T00:51:03.511143234Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:03.512762 containerd[1452]: time="2026-01-17T00:51:03.512559084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:51:03.512837 containerd[1452]: time="2026-01-17T00:51:03.512599843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:51:03.513008 kubelet[2496]: E0117 00:51:03.512926 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:51:03.513008 kubelet[2496]: E0117 00:51:03.513000 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:51:03.513198 kubelet[2496]: E0117 00:51:03.513131 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-s7ntg_calico-system(fff518d5-06d5-4f2e-9a9a-f374cb758607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:03.513244 kubelet[2496]: E0117 00:51:03.513203 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:51:05.424140 containerd[1452]: time="2026-01-17T00:51:05.424072122Z" level=info msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" Jan 17 00:51:05.445892 containerd[1452]: time="2026-01-17T00:51:05.445109194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:51:05.509348 containerd[1452]: time="2026-01-17T00:51:05.509254485Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:05.516110 containerd[1452]: time="2026-01-17T00:51:05.516022378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:51:05.518518 containerd[1452]: time="2026-01-17T00:51:05.516554808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:51:05.518518 containerd[1452]: time="2026-01-17T00:51:05.517791696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:51:05.520116 kubelet[2496]: E0117 00:51:05.516993 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:51:05.520116 kubelet[2496]: E0117 00:51:05.517033 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:51:05.520116 kubelet[2496]: E0117 00:51:05.517216 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59cdfd4dfb-d7ft6_calico-apiserver(d9a48e4c-2642-431f-9b1f-b247428bfac1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:05.520116 kubelet[2496]: E0117 00:51:05.518195 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.479 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9a48e4c-2642-431f-9b1f-b247428bfac1", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda", Pod:"calico-apiserver-59cdfd4dfb-d7ft6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b21efce200", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.479 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.479 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" iface="eth0" netns="" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.480 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.480 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.508 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.509 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.509 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.521 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.521 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.523 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.529946 containerd[1452]: 2026-01-17 00:51:05.526 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.530486 containerd[1452]: time="2026-01-17T00:51:05.530027214Z" level=info msg="TearDown network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" successfully" Jan 17 00:51:05.530486 containerd[1452]: time="2026-01-17T00:51:05.530047732Z" level=info msg="StopPodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" returns successfully" Jan 17 00:51:05.531238 containerd[1452]: time="2026-01-17T00:51:05.531029215Z" level=info msg="RemovePodSandbox for \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" Jan 17 00:51:05.533984 containerd[1452]: time="2026-01-17T00:51:05.533933376Z" level=info msg="Forcibly stopping sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\"" Jan 17 00:51:05.581501 containerd[1452]: time="2026-01-17T00:51:05.581438619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:05.583096 containerd[1452]: time="2026-01-17T00:51:05.582984596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:51:05.583096 containerd[1452]: time="2026-01-17T00:51:05.583046692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:51:05.583414 kubelet[2496]: E0117 00:51:05.583251 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:51:05.583414 kubelet[2496]: E0117 00:51:05.583321 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:51:05.583414 kubelet[2496]: E0117 00:51:05.583400 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b6b7bfc9b-vp5zs_calico-system(4861c4dc-4420-41d7-806f-ea096c9baa96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:05.583777 kubelet[2496]: E0117 00:51:05.583438 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.580 [WARNING][5045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9a48e4c-2642-431f-9b1f-b247428bfac1", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6de4b315efca78de526594e81a8144105bd5d1841ce6328418083ab7eb83dbda", Pod:"calico-apiserver-59cdfd4dfb-d7ft6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b21efce200", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.580 [INFO][5045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.581 [INFO][5045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" iface="eth0" netns="" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.581 [INFO][5045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.581 [INFO][5045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.608 [INFO][5054] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.608 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.608 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.615 [WARNING][5054] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.615 [INFO][5054] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" HandleID="k8s-pod-network.1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--d7ft6-eth0" Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.617 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.622743 containerd[1452]: 2026-01-17 00:51:05.619 [INFO][5045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78" Jan 17 00:51:05.623275 containerd[1452]: time="2026-01-17T00:51:05.622787232Z" level=info msg="TearDown network for sandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" successfully" Jan 17 00:51:05.627811 containerd[1452]: time="2026-01-17T00:51:05.627764314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:05.627811 containerd[1452]: time="2026-01-17T00:51:05.627842840Z" level=info msg="RemovePodSandbox \"1625cd1bbd7e36d198eb8f1203413f084fbfde5c03a218ad046b356044fb6f78\" returns successfully" Jan 17 00:51:05.628485 containerd[1452]: time="2026-01-17T00:51:05.628451625Z" level=info msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.674 [WARNING][5071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0", GenerateName:"calico-kube-controllers-7b6b7bfc9b-", Namespace:"calico-system", SelfLink:"", UID:"4861c4dc-4420-41d7-806f-ea096c9baa96", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6b7bfc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf", Pod:"calico-kube-controllers-7b6b7bfc9b-vp5zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali418e01150d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.675 [INFO][5071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.675 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" iface="eth0" netns="" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.675 [INFO][5071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.675 [INFO][5071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.703 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.703 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.703 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.711 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.711 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.713 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.719527 containerd[1452]: 2026-01-17 00:51:05.716 [INFO][5071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.719527 containerd[1452]: time="2026-01-17T00:51:05.719395314Z" level=info msg="TearDown network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" successfully" Jan 17 00:51:05.719527 containerd[1452]: time="2026-01-17T00:51:05.719422584Z" level=info msg="StopPodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" returns successfully" Jan 17 00:51:05.720298 containerd[1452]: time="2026-01-17T00:51:05.719961591Z" level=info msg="RemovePodSandbox for \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" Jan 17 00:51:05.720298 containerd[1452]: time="2026-01-17T00:51:05.719988141Z" level=info msg="Forcibly stopping sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\"" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.769 [WARNING][5098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0", GenerateName:"calico-kube-controllers-7b6b7bfc9b-", Namespace:"calico-system", SelfLink:"", UID:"4861c4dc-4420-41d7-806f-ea096c9baa96", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6b7bfc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af371d26f40fed665ac12ac3ec6e48f86755c702afefa8073f2d3754cfe68dbf", Pod:"calico-kube-controllers-7b6b7bfc9b-vp5zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali418e01150d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.770 [INFO][5098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.770 [INFO][5098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" iface="eth0" netns="" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.770 [INFO][5098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.770 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.797 [INFO][5107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.797 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.797 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.804 [WARNING][5107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.804 [INFO][5107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" HandleID="k8s-pod-network.a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Workload="localhost-k8s-calico--kube--controllers--7b6b7bfc9b--vp5zs-eth0" Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.806 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.812304 containerd[1452]: 2026-01-17 00:51:05.809 [INFO][5098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b" Jan 17 00:51:05.812304 containerd[1452]: time="2026-01-17T00:51:05.812291149Z" level=info msg="TearDown network for sandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" successfully" Jan 17 00:51:05.817427 containerd[1452]: time="2026-01-17T00:51:05.817380014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:05.817493 containerd[1452]: time="2026-01-17T00:51:05.817443543Z" level=info msg="RemovePodSandbox \"a2239fbba04cc9ce4f11f1a3ce5f573c51973ecda873c5c49b63a153a200d96b\" returns successfully" Jan 17 00:51:05.818070 containerd[1452]: time="2026-01-17T00:51:05.817921584Z" level=info msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.855 [WARNING][5124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--s7ntg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fff518d5-06d5-4f2e-9a9a-f374cb758607", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368", Pod:"goldmane-7c778bb748-s7ntg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie50565285e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.855 [INFO][5124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.855 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" iface="eth0" netns="" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.856 [INFO][5124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.856 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.882 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.882 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.882 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.889 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.889 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.891 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.896861 containerd[1452]: 2026-01-17 00:51:05.893 [INFO][5124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.897510 containerd[1452]: time="2026-01-17T00:51:05.897431048Z" level=info msg="TearDown network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" successfully" Jan 17 00:51:05.897510 containerd[1452]: time="2026-01-17T00:51:05.897493956Z" level=info msg="StopPodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" returns successfully" Jan 17 00:51:05.898320 containerd[1452]: time="2026-01-17T00:51:05.898277769Z" level=info msg="RemovePodSandbox for \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" Jan 17 00:51:05.898382 containerd[1452]: time="2026-01-17T00:51:05.898332482Z" level=info msg="Forcibly stopping sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\"" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.945 [WARNING][5150] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--s7ntg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fff518d5-06d5-4f2e-9a9a-f374cb758607", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deb36f0266c7c4ba593e5fb0f3fdd479c2c5fd1a7af1717460aee43635074368", Pod:"goldmane-7c778bb748-s7ntg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie50565285e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.945 [INFO][5150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.945 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" iface="eth0" netns="" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.945 [INFO][5150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.945 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.976 [INFO][5159] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.976 [INFO][5159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.976 [INFO][5159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.983 [WARNING][5159] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.983 [INFO][5159] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" HandleID="k8s-pod-network.50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Workload="localhost-k8s-goldmane--7c778bb748--s7ntg-eth0" Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.985 [INFO][5159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:05.990924 containerd[1452]: 2026-01-17 00:51:05.988 [INFO][5150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92" Jan 17 00:51:05.991551 containerd[1452]: time="2026-01-17T00:51:05.990908232Z" level=info msg="TearDown network for sandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" successfully" Jan 17 00:51:05.996008 containerd[1452]: time="2026-01-17T00:51:05.995911163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:05.996053 containerd[1452]: time="2026-01-17T00:51:05.996023463Z" level=info msg="RemovePodSandbox \"50be12087a48e14d96d3987c0911a4e68c36631a634592a73a46963fda526e92\" returns successfully" Jan 17 00:51:05.996684 containerd[1452]: time="2026-01-17T00:51:05.996630027Z" level=info msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.039 [WARNING][5178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dvm5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c965ee07-9ebc-4401-bd94-6f4cb9cb8928", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9", Pod:"coredns-66bc5c9577-dvm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali720defa8108", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.039 [INFO][5178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.039 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" iface="eth0" netns="" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.039 [INFO][5178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.039 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.067 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.067 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.067 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.075 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.076 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.078 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.084752 containerd[1452]: 2026-01-17 00:51:06.081 [INFO][5178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.085515 containerd[1452]: time="2026-01-17T00:51:06.084789659Z" level=info msg="TearDown network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" successfully" Jan 17 00:51:06.085515 containerd[1452]: time="2026-01-17T00:51:06.084819866Z" level=info msg="StopPodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" returns successfully" Jan 17 00:51:06.085515 containerd[1452]: time="2026-01-17T00:51:06.085337992Z" level=info msg="RemovePodSandbox for \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" Jan 17 00:51:06.085515 containerd[1452]: time="2026-01-17T00:51:06.085364602Z" level=info msg="Forcibly stopping sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\"" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.132 [WARNING][5205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dvm5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c965ee07-9ebc-4401-bd94-6f4cb9cb8928", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f34a41af24dcc393c165537ac7415ccaa4be35c52ebf19f4717ba08e9fe14a9", Pod:"coredns-66bc5c9577-dvm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali720defa8108", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.132 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.132 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" iface="eth0" netns="" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.132 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.132 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.159 [INFO][5213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.159 [INFO][5213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.159 [INFO][5213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.167 [WARNING][5213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.167 [INFO][5213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" HandleID="k8s-pod-network.2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Workload="localhost-k8s-coredns--66bc5c9577--dvm5c-eth0" Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.168 [INFO][5213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.173527 containerd[1452]: 2026-01-17 00:51:06.171 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7" Jan 17 00:51:06.173527 containerd[1452]: time="2026-01-17T00:51:06.173467083Z" level=info msg="TearDown network for sandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" successfully" Jan 17 00:51:06.178941 containerd[1452]: time="2026-01-17T00:51:06.178900936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:51:06.179114 containerd[1452]: time="2026-01-17T00:51:06.178954707Z" level=info msg="RemovePodSandbox \"2c12620890e2f5ca4eeef0a575d3fc909646c1134355c785ed04cef4daf3e1f7\" returns successfully" Jan 17 00:51:06.179508 containerd[1452]: time="2026-01-17T00:51:06.179460400Z" level=info msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.217 [WARNING][5230] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" WorkloadEndpoint="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.217 [INFO][5230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.217 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" iface="eth0" netns="" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.217 [INFO][5230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.217 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.249 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.249 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.249 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.256 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.256 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.258 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.263911 containerd[1452]: 2026-01-17 00:51:06.261 [INFO][5230] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.263911 containerd[1452]: time="2026-01-17T00:51:06.263892986Z" level=info msg="TearDown network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" successfully" Jan 17 00:51:06.264252 containerd[1452]: time="2026-01-17T00:51:06.263923403Z" level=info msg="StopPodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" returns successfully" Jan 17 00:51:06.264802 containerd[1452]: time="2026-01-17T00:51:06.264601801Z" level=info msg="RemovePodSandbox for \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" Jan 17 00:51:06.264946 containerd[1452]: time="2026-01-17T00:51:06.264910046Z" level=info msg="Forcibly stopping sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\"" Jan 17 00:51:06.290398 systemd[1]: Started sshd@12-10.0.0.159:22-10.0.0.1:44868.service - OpenSSH per-connection server daemon (10.0.0.1:44868). Jan 17 00:51:06.358581 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 44868 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.312 [WARNING][5258] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" WorkloadEndpoint="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.312 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.312 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" iface="eth0" netns="" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.312 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.312 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.344 [INFO][5269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.345 [INFO][5269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.345 [INFO][5269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.351 [WARNING][5269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.351 [INFO][5269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" HandleID="k8s-pod-network.e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Workload="localhost-k8s-whisker--7c79dcf7c7--p9s7n-eth0" Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.353 [INFO][5269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.360477 containerd[1452]: 2026-01-17 00:51:06.357 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d" Jan 17 00:51:06.360477 containerd[1452]: time="2026-01-17T00:51:06.360454293Z" level=info msg="TearDown network for sandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" successfully" Jan 17 00:51:06.361028 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:06.367955 systemd-logind[1432]: New session 13 of user core. Jan 17 00:51:06.371987 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:51:06.372257 containerd[1452]: time="2026-01-17T00:51:06.372143483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:06.372257 containerd[1452]: time="2026-01-17T00:51:06.372194689Z" level=info msg="RemovePodSandbox \"e6ae1c1651e056920b6d76234470f406b01855d0d4eb5d53468d273d2887e44d\" returns successfully" Jan 17 00:51:06.372960 containerd[1452]: time="2026-01-17T00:51:06.372842667Z" level=info msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" Jan 17 00:51:06.449596 containerd[1452]: time="2026-01-17T00:51:06.449503789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.418 [WARNING][5290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pldn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4022344e-59ba-4aec-9ee8-9c1779407c17", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948", Pod:"csi-node-driver-8pldn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia13b9e76c09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.418 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.418 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" iface="eth0" netns="" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.418 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.418 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.449 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.449 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.450 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.462 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.462 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.467 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.474395 containerd[1452]: 2026-01-17 00:51:06.470 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.475151 containerd[1452]: time="2026-01-17T00:51:06.474396204Z" level=info msg="TearDown network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" successfully" Jan 17 00:51:06.475151 containerd[1452]: time="2026-01-17T00:51:06.474428224Z" level=info msg="StopPodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" returns successfully" Jan 17 00:51:06.475837 containerd[1452]: time="2026-01-17T00:51:06.475262402Z" level=info msg="RemovePodSandbox for \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" Jan 17 00:51:06.475837 containerd[1452]: time="2026-01-17T00:51:06.475292568Z" level=info msg="Forcibly stopping sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\"" Jan 17 00:51:06.521397 containerd[1452]: time="2026-01-17T00:51:06.520998966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:06.524999 containerd[1452]: time="2026-01-17T00:51:06.523606455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:51:06.524999 containerd[1452]: time="2026-01-17T00:51:06.523812078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:51:06.528015 kubelet[2496]: E0117 00:51:06.527897 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:51:06.528015 kubelet[2496]: E0117 00:51:06.527968 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:51:06.528486 kubelet[2496]: E0117 00:51:06.528038 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59cdfd4dfb-nd9rl_calico-apiserver(e797004f-4966-4738-8311-6962046bba3a): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:06.528486 kubelet[2496]: E0117 00:51:06.528073 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:51:06.557947 sshd[5265]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:06.564476 systemd[1]: sshd@12-10.0.0.159:22-10.0.0.1:44868.service: Deactivated successfully. Jan 17 00:51:06.567196 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:51:06.569005 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:51:06.583160 systemd[1]: Started sshd@13-10.0.0.159:22-10.0.0.1:44870.service - OpenSSH per-connection server daemon (10.0.0.1:44870). Jan 17 00:51:06.585128 systemd-logind[1432]: Removed session 13. Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.527 [WARNING][5326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pldn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4022344e-59ba-4aec-9ee8-9c1779407c17", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"942e4da0c98418182fc23b8bb75f160e857bb883a0a9509250cf0ece1b2c6948", Pod:"csi-node-driver-8pldn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia13b9e76c09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.527 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.528 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" iface="eth0" netns="" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.528 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.528 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.573 [INFO][5335] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.573 [INFO][5335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.573 [INFO][5335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.581 [WARNING][5335] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.581 [INFO][5335] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" HandleID="k8s-pod-network.802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Workload="localhost-k8s-csi--node--driver--8pldn-eth0" Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.584 [INFO][5335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.590496 containerd[1452]: 2026-01-17 00:51:06.587 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592" Jan 17 00:51:06.591058 containerd[1452]: time="2026-01-17T00:51:06.590547059Z" level=info msg="TearDown network for sandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" successfully" Jan 17 00:51:06.597452 containerd[1452]: time="2026-01-17T00:51:06.597379365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:06.597530 containerd[1452]: time="2026-01-17T00:51:06.597468902Z" level=info msg="RemovePodSandbox \"802288c5792ba1da0bc145e4e827329438dd9275ea7bc9fca2450f6ee2bd1592\" returns successfully" Jan 17 00:51:06.598411 containerd[1452]: time="2026-01-17T00:51:06.598341249Z" level=info msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" Jan 17 00:51:06.618891 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 44870 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:06.621339 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:06.628961 systemd-logind[1432]: New session 14 of user core. 
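The PullImage failure recorded above is a plain registry 404: ghcr.io has no v3.30.4 tag for flatcar/calico/apiserver, so containerd exhausts its hosts and kubelet reports ErrImagePull for calico-apiserver-59cdfd4dfb-nd9rl. A minimal sketch of reproducing that lookup outside kubelet, through the same CRI ImageService (socket path assumed, image reference copied from the log), would be expected to fail with the same NotFound error:

// Sketch: ask containerd's CRI ImageService to pull the failing reference.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	img := runtime.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	resp, err := img.PullImage(ctx, &runtime.PullImageRequest{
		Image: &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4"},
	})
	if err != nil {
		// Expected outcome given the log: rpc error ... v3.30.4: not found.
		log.Fatalf("PullImage failed: %v", err)
	}
	log.Printf("pulled image ref: %s", resp.ImageRef)
}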
Jan 17 00:51:06.633813 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.654 [WARNING][5358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gm8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"00ae415f-67f7-4e67-a9d9-4d68f93ea018", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544", Pod:"coredns-66bc5c9577-gm8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedcdd0d4063", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.654 [INFO][5358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.654 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" iface="eth0" netns="" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.654 [INFO][5358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.654 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.689 [INFO][5369] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.690 [INFO][5369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.690 [INFO][5369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.697 [WARNING][5369] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.697 [INFO][5369] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.699 [INFO][5369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.704974 containerd[1452]: 2026-01-17 00:51:06.702 [INFO][5358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.704974 containerd[1452]: time="2026-01-17T00:51:06.704945223Z" level=info msg="TearDown network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" successfully" Jan 17 00:51:06.704974 containerd[1452]: time="2026-01-17T00:51:06.704974337Z" level=info msg="StopPodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" returns successfully" Jan 17 00:51:06.706330 containerd[1452]: time="2026-01-17T00:51:06.706262602Z" level=info msg="RemovePodSandbox for \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" Jan 17 00:51:06.706440 containerd[1452]: time="2026-01-17T00:51:06.706400879Z" level=info msg="Forcibly stopping sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\"" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.755 [WARNING][5393] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gm8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"00ae415f-67f7-4e67-a9d9-4d68f93ea018", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c693f1ce988915d3523b3741fb432b792be780589540951790964ad418c45544", Pod:"coredns-66bc5c9577-gm8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedcdd0d4063", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.756 [INFO][5393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.756 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" iface="eth0" netns="" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.756 [INFO][5393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.756 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.788 [INFO][5403] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.788 [INFO][5403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.788 [INFO][5403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.801 [WARNING][5403] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.801 [INFO][5403] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" HandleID="k8s-pod-network.f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Workload="localhost-k8s-coredns--66bc5c9577--gm8m8-eth0" Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.804 [INFO][5403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.812826 containerd[1452]: 2026-01-17 00:51:06.808 [INFO][5393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05" Jan 17 00:51:06.812826 containerd[1452]: time="2026-01-17T00:51:06.811805607Z" level=info msg="TearDown network for sandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" successfully" Jan 17 00:51:06.817578 containerd[1452]: time="2026-01-17T00:51:06.817540052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:06.818097 containerd[1452]: time="2026-01-17T00:51:06.817582020Z" level=info msg="RemovePodSandbox \"f66d97fd21ba1fe3dd1a9de904f82e4da272a9a09b9d8ce4d554d44019234a05\" returns successfully" Jan 17 00:51:06.818375 containerd[1452]: time="2026-01-17T00:51:06.818290230Z" level=info msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.871 [WARNING][5420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e797004f-4966-4738-8311-6962046bba3a", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603", Pod:"calico-apiserver-59cdfd4dfb-nd9rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6685fdb1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.872 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.872 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" iface="eth0" netns="" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.872 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.872 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.905 [INFO][5428] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.905 [INFO][5428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.905 [INFO][5428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.913 [WARNING][5428] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.913 [INFO][5428] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.915 [INFO][5428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:06.924181 containerd[1452]: 2026-01-17 00:51:06.921 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:06.924847 containerd[1452]: time="2026-01-17T00:51:06.924201972Z" level=info msg="TearDown network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" successfully" Jan 17 00:51:06.924847 containerd[1452]: time="2026-01-17T00:51:06.924235634Z" level=info msg="StopPodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" returns successfully" Jan 17 00:51:06.925184 containerd[1452]: time="2026-01-17T00:51:06.925061987Z" level=info msg="RemovePodSandbox for \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" Jan 17 00:51:06.925184 containerd[1452]: time="2026-01-17T00:51:06.925128161Z" level=info msg="Forcibly stopping sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\"" Jan 17 00:51:06.952484 sshd[5345]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:06.962482 systemd[1]: sshd@13-10.0.0.159:22-10.0.0.1:44870.service: Deactivated successfully. Jan 17 00:51:06.965240 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:51:06.966303 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:51:06.976104 systemd[1]: Started sshd@14-10.0.0.159:22-10.0.0.1:44874.service - OpenSSH per-connection server daemon (10.0.0.1:44874). Jan 17 00:51:06.978446 systemd-logind[1432]: Removed session 14. Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.971 [WARNING][5445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0", GenerateName:"calico-apiserver-59cdfd4dfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e797004f-4966-4738-8311-6962046bba3a", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cdfd4dfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8923f32580430eeacf7f1f3129226699d0bcc2c4b1ed3010a2f48f3f5d27e603", Pod:"calico-apiserver-59cdfd4dfb-nd9rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6685fdb1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.971 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.971 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" iface="eth0" netns="" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.971 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.971 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.998 [INFO][5457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.999 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:06.999 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:07.006 [WARNING][5457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:07.006 [INFO][5457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" HandleID="k8s-pod-network.a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Workload="localhost-k8s-calico--apiserver--59cdfd4dfb--nd9rl-eth0" Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:07.008 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:51:07.012970 containerd[1452]: 2026-01-17 00:51:07.010 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a" Jan 17 00:51:07.013638 containerd[1452]: time="2026-01-17T00:51:07.013017581Z" level=info msg="TearDown network for sandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" successfully" Jan 17 00:51:07.018426 containerd[1452]: time="2026-01-17T00:51:07.018368855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:51:07.018513 containerd[1452]: time="2026-01-17T00:51:07.018463372Z" level=info msg="RemovePodSandbox \"a0296577054439c7b8b88e0f6159fde67941c228b7fe2da080c907057842667a\" returns successfully" Jan 17 00:51:07.024207 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 44874 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:07.026226 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:07.032538 systemd-logind[1432]: New session 15 of user core. Jan 17 00:51:07.048935 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:51:07.685089 sshd[5455]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:07.694375 systemd[1]: sshd@14-10.0.0.159:22-10.0.0.1:44874.service: Deactivated successfully. Jan 17 00:51:07.696562 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:51:07.698081 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:51:07.710115 systemd[1]: Started sshd@15-10.0.0.159:22-10.0.0.1:44890.service - OpenSSH per-connection server daemon (10.0.0.1:44890). Jan 17 00:51:07.714369 systemd-logind[1432]: Removed session 15. Jan 17 00:51:07.764402 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 44890 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:07.766128 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:07.772536 systemd-logind[1432]: New session 16 of user core. Jan 17 00:51:07.780954 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:51:08.036429 sshd[5481]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:08.048163 systemd[1]: sshd@15-10.0.0.159:22-10.0.0.1:44890.service: Deactivated successfully. Jan 17 00:51:08.051804 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:51:08.054780 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. 
Jan 17 00:51:08.067054 systemd[1]: Started sshd@16-10.0.0.159:22-10.0.0.1:44906.service - OpenSSH per-connection server daemon (10.0.0.1:44906). Jan 17 00:51:08.068249 systemd-logind[1432]: Removed session 16. Jan 17 00:51:08.102023 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 44906 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:08.104428 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:08.110479 systemd-logind[1432]: New session 17 of user core. Jan 17 00:51:08.117935 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:51:08.265259 sshd[5493]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:08.270564 systemd[1]: sshd@16-10.0.0.159:22-10.0.0.1:44906.service: Deactivated successfully. Jan 17 00:51:08.273365 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:51:08.274739 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:51:08.276359 systemd-logind[1432]: Removed session 17. Jan 17 00:51:11.444576 containerd[1452]: time="2026-01-17T00:51:11.444529332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:51:11.507312 containerd[1452]: time="2026-01-17T00:51:11.507232407Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:11.508746 containerd[1452]: time="2026-01-17T00:51:11.508616768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:51:11.508814 containerd[1452]: time="2026-01-17T00:51:11.508785993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:51:11.509082 kubelet[2496]: E0117 00:51:11.508900 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:51:11.509082 kubelet[2496]: E0117 00:51:11.508975 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:51:11.509082 kubelet[2496]: E0117 00:51:11.509060 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:11.510543 containerd[1452]: time="2026-01-17T00:51:11.510343977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:51:11.579057 containerd[1452]: time="2026-01-17T00:51:11.578930528Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:51:11.580376 
containerd[1452]: time="2026-01-17T00:51:11.580261950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:51:11.580376 containerd[1452]: time="2026-01-17T00:51:11.580324384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:51:11.580770 kubelet[2496]: E0117 00:51:11.580582 2496 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:51:11.580869 kubelet[2496]: E0117 00:51:11.580769 2496 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:51:11.580869 kubelet[2496]: E0117 00:51:11.580844 2496 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8pldn_calico-system(4022344e-59ba-4aec-9ee8-9c1779407c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:51:11.581040 kubelet[2496]: E0117 00:51:11.580891 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pldn" podUID="4022344e-59ba-4aec-9ee8-9c1779407c17" Jan 17 00:51:13.284160 systemd[1]: Started sshd@17-10.0.0.159:22-10.0.0.1:59708.service - OpenSSH per-connection server daemon (10.0.0.1:59708). Jan 17 00:51:13.317511 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 59708 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:13.319582 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:13.325977 systemd-logind[1432]: New session 18 of user core. Jan 17 00:51:13.332920 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 00:51:13.468454 sshd[5517]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:13.474011 systemd[1]: sshd@17-10.0.0.159:22-10.0.0.1:59708.service: Deactivated successfully. Jan 17 00:51:13.476826 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:51:13.477989 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:51:13.479856 systemd-logind[1432]: Removed session 18. Jan 17 00:51:14.446466 kubelet[2496]: E0117 00:51:14.445380 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf56495c7-svn2v" podUID="e26a3e55-fb3a-4994-957c-83980e4edeb6" Jan 17 00:51:17.444337 kubelet[2496]: E0117 00:51:17.444255 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-s7ntg" podUID="fff518d5-06d5-4f2e-9a9a-f374cb758607" Jan 17 00:51:17.444337 kubelet[2496]: E0117 00:51:17.444282 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-nd9rl" podUID="e797004f-4966-4738-8311-6962046bba3a" Jan 17 00:51:18.444230 kubelet[2496]: E0117 00:51:18.444032 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cdfd4dfb-d7ft6" podUID="d9a48e4c-2642-431f-9b1f-b247428bfac1" Jan 17 00:51:18.485912 systemd[1]: Started sshd@18-10.0.0.159:22-10.0.0.1:59710.service - OpenSSH per-connection server daemon (10.0.0.1:59710). 
Jan 17 00:51:18.528159 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 59710 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:18.530798 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:18.537627 systemd-logind[1432]: New session 19 of user core. Jan 17 00:51:18.550055 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:51:18.677072 sshd[5538]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:18.681913 systemd[1]: sshd@18-10.0.0.159:22-10.0.0.1:59710.service: Deactivated successfully. Jan 17 00:51:18.684071 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:51:18.685106 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:51:18.687141 systemd-logind[1432]: Removed session 19. Jan 17 00:51:19.444343 kubelet[2496]: E0117 00:51:19.444194 2496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6b7bfc9b-vp5zs" podUID="4861c4dc-4420-41d7-806f-ea096c9baa96" Jan 17 00:51:23.693893 systemd[1]: Started sshd@19-10.0.0.159:22-10.0.0.1:33546.service - OpenSSH per-connection server daemon (10.0.0.1:33546). Jan 17 00:51:23.735868 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 33546 ssh2: RSA SHA256:UBEhqR/Avj3dDMUwbulE7593gU6PcEdc1HwaLh6LUCo Jan 17 00:51:23.738083 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:51:23.743901 systemd-logind[1432]: New session 20 of user core. Jan 17 00:51:23.750991 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:51:23.888254 sshd[5554]: pam_unix(sshd:session): session closed for user core Jan 17 00:51:23.892957 systemd[1]: sshd@19-10.0.0.159:22-10.0.0.1:33546.service: Deactivated successfully. Jan 17 00:51:23.895439 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:51:23.896769 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:51:23.898540 systemd-logind[1432]: Removed session 20. Jan 17 00:51:25.443371 kubelet[2496]: E0117 00:51:25.443302 2496 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"