Jan 17 00:15:06.152634 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:15:06.152668 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:15:06.152687 kernel: BIOS-provided physical RAM map:
Jan 17 00:15:06.152694 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:15:06.152701 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:15:06.152707 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:15:06.152715 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 00:15:06.152722 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 00:15:06.152728 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:15:06.152738 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:15:06.152745 kernel: NX (Execute Disable) protection: active
Jan 17 00:15:06.152751 kernel: APIC: Static calls initialized
Jan 17 00:15:06.152764 kernel: SMBIOS 2.8 present.
Jan 17 00:15:06.152771 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 00:15:06.152780 kernel: Hypervisor detected: KVM
Jan 17 00:15:06.152789 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:15:06.152801 kernel: kvm-clock: using sched offset of 4263189371 cycles
Jan 17 00:15:06.152814 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:15:06.152825 kernel: tsc: Detected 1995.305 MHz processor
Jan 17 00:15:06.152833 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:15:06.152840 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:15:06.152848 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 00:15:06.152855 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:15:06.152862 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:15:06.152874 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:15:06.152887 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 00:15:06.152901 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.152915 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.152927 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.152939 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 00:15:06.152952 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.152965 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.152979 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.153010 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:15:06.153021 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Jan 17 00:15:06.153032 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Jan 17 00:15:06.153043 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 00:15:06.153055 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Jan 17 00:15:06.153068 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Jan 17 00:15:06.153082 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Jan 17 00:15:06.153105 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Jan 17 00:15:06.153118 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:15:06.153131 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:15:06.153146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:15:06.153154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 00:15:06.153168 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 00:15:06.153183 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 00:15:06.153197 kernel: Zone ranges:
Jan 17 00:15:06.153205 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:15:06.153213 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 00:15:06.153220 kernel: Normal empty
Jan 17 00:15:06.153228 kernel: Movable zone start for each node
Jan 17 00:15:06.153236 kernel: Early memory node ranges
Jan 17 00:15:06.153244 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:15:06.153251 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 00:15:06.153258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 00:15:06.153274 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:15:06.153289 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:15:06.153308 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 00:15:06.153323 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:15:06.153337 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:15:06.153351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:15:06.153365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:15:06.153379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:15:06.153393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:15:06.153412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:15:06.153424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:15:06.153435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:15:06.153449 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:15:06.153464 kernel: TSC deadline timer available
Jan 17 00:15:06.153479 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:15:06.153494 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:15:06.153508 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 00:15:06.153680 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:15:06.153703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:15:06.153719 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:15:06.153733 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:15:06.153748 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:15:06.153762 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:15:06.153777 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:15:06.153793 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:15:06.153808 kernel: random: crng init done
Jan 17 00:15:06.153819 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:15:06.153827 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:15:06.153835 kernel: Fallback order for Node 0: 0
Jan 17 00:15:06.153842 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 00:15:06.153850 kernel: Policy zone: DMA32
Jan 17 00:15:06.153857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:15:06.153865 kernel: Memory: 1971212K/2096612K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 125140K reserved, 0K cma-reserved)
Jan 17 00:15:06.153873 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:15:06.153883 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:15:06.153899 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:15:06.153909 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:15:06.153917 kernel: Dynamic Preempt: voluntary
Jan 17 00:15:06.153925 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:15:06.153946 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:15:06.153955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:15:06.153962 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:15:06.153970 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:15:06.153978 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:15:06.153989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:15:06.154002 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:15:06.154010 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:15:06.154021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:15:06.154035 kernel: Console: colour VGA+ 80x25
Jan 17 00:15:06.154048 kernel: printk: console [tty0] enabled
Jan 17 00:15:06.154055 kernel: printk: console [ttyS0] enabled
Jan 17 00:15:06.154063 kernel: ACPI: Core revision 20230628
Jan 17 00:15:06.154071 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:15:06.154082 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:15:06.154089 kernel: x2apic enabled
Jan 17 00:15:06.154097 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:15:06.154104 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:15:06.154112 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985b6280e7, max_idle_ns: 881590416988 ns
Jan 17 00:15:06.154120 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995305)
Jan 17 00:15:06.154128 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 00:15:06.154135 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 00:15:06.154143 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:15:06.154170 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:15:06.154178 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:15:06.154187 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 00:15:06.154198 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:15:06.154206 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:15:06.154217 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:15:06.154229 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:15:06.154241 kernel: active return thunk: its_return_thunk
Jan 17 00:15:06.154260 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:15:06.154280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:15:06.154296 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:15:06.154313 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:15:06.154328 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:15:06.154343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:15:06.154359 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:15:06.154374 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:15:06.154384 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:15:06.154396 kernel: landlock: Up and running.
Jan 17 00:15:06.154408 kernel: SELinux: Initializing.
Jan 17 00:15:06.154422 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:15:06.154434 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:15:06.154449 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 00:15:06.154465 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:15:06.154481 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:15:06.154497 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:15:06.154515 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 00:15:06.155861 kernel: signal: max sigframe size: 1776
Jan 17 00:15:06.155876 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:15:06.155889 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:15:06.155900 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:15:06.155913 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:15:06.155927 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:15:06.155939 kernel: .... node #0, CPUs: #1
Jan 17 00:15:06.155949 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:15:06.155984 kernel: smpboot: Max logical packages: 1
Jan 17 00:15:06.155998 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS)
Jan 17 00:15:06.156011 kernel: devtmpfs: initialized
Jan 17 00:15:06.156023 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:15:06.156036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:15:06.156049 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:15:06.156058 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:15:06.156066 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:15:06.156074 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:15:06.156083 kernel: audit: type=2000 audit(1768608904.581:1): state=initialized audit_enabled=0 res=1
Jan 17 00:15:06.156095 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:15:06.156103 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:15:06.156111 kernel: cpuidle: using governor menu
Jan 17 00:15:06.156120 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:15:06.156128 kernel: dca service started, version 1.12.1
Jan 17 00:15:06.156136 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:15:06.156145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:15:06.156153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:15:06.156165 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:15:06.156173 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:15:06.156181 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:15:06.156190 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:15:06.156197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:15:06.156206 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:15:06.156214 kernel: ACPI: Interpreter enabled
Jan 17 00:15:06.156222 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:15:06.156230 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:15:06.156239 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:15:06.156250 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:15:06.156258 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:15:06.156266 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:15:06.158746 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:15:06.158909 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:15:06.159017 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:15:06.159034 kernel: acpiphp: Slot [3] registered
Jan 17 00:15:06.159056 kernel: acpiphp: Slot [4] registered
Jan 17 00:15:06.159069 kernel: acpiphp: Slot [5] registered
Jan 17 00:15:06.159084 kernel: acpiphp: Slot [6] registered
Jan 17 00:15:06.159098 kernel: acpiphp: Slot [7] registered
Jan 17 00:15:06.159112 kernel: acpiphp: Slot [8] registered
Jan 17 00:15:06.159124 kernel: acpiphp: Slot [9] registered
Jan 17 00:15:06.159137 kernel: acpiphp: Slot [10] registered
Jan 17 00:15:06.159153 kernel: acpiphp: Slot [11] registered
Jan 17 00:15:06.159167 kernel: acpiphp: Slot [12] registered
Jan 17 00:15:06.159186 kernel: acpiphp: Slot [13] registered
Jan 17 00:15:06.159200 kernel: acpiphp: Slot [14] registered
Jan 17 00:15:06.159208 kernel: acpiphp: Slot [15] registered
Jan 17 00:15:06.159216 kernel: acpiphp: Slot [16] registered
Jan 17 00:15:06.159225 kernel: acpiphp: Slot [17] registered
Jan 17 00:15:06.159233 kernel: acpiphp: Slot [18] registered
Jan 17 00:15:06.159242 kernel: acpiphp: Slot [19] registered
Jan 17 00:15:06.159250 kernel: acpiphp: Slot [20] registered
Jan 17 00:15:06.159258 kernel: acpiphp: Slot [21] registered
Jan 17 00:15:06.159266 kernel: acpiphp: Slot [22] registered
Jan 17 00:15:06.159277 kernel: acpiphp: Slot [23] registered
Jan 17 00:15:06.159286 kernel: acpiphp: Slot [24] registered
Jan 17 00:15:06.159294 kernel: acpiphp: Slot [25] registered
Jan 17 00:15:06.159302 kernel: acpiphp: Slot [26] registered
Jan 17 00:15:06.159311 kernel: acpiphp: Slot [27] registered
Jan 17 00:15:06.159319 kernel: acpiphp: Slot [28] registered
Jan 17 00:15:06.159327 kernel: acpiphp: Slot [29] registered
Jan 17 00:15:06.159336 kernel: acpiphp: Slot [30] registered
Jan 17 00:15:06.159344 kernel: acpiphp: Slot [31] registered
Jan 17 00:15:06.159355 kernel: PCI host bridge to bus 0000:00
Jan 17 00:15:06.159495 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:15:06.159645 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:15:06.159761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:15:06.159850 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:15:06.159937 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 00:15:06.160024 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:15:06.160202 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:15:06.160712 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:15:06.160889 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 00:15:06.161046 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 00:15:06.161199 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 00:15:06.161314 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 00:15:06.161450 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 00:15:06.163121 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 00:15:06.163271 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 00:15:06.163371 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 00:15:06.163484 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:15:06.165699 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 00:15:06.165860 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 00:15:06.165991 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:15:06.166094 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 00:15:06.166232 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 00:15:06.166381 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 00:15:06.166480 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 00:15:06.166681 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:15:06.166856 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:15:06.166957 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 00:15:06.167081 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 00:15:06.167203 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 00:15:06.167313 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:15:06.167412 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 00:15:06.167545 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 00:15:06.167675 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 00:15:06.167789 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 00:15:06.167887 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 00:15:06.167998 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 00:15:06.168125 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 00:15:06.168282 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:15:06.168395 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 00:15:06.168499 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 00:15:06.170887 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 00:15:06.171150 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:15:06.171261 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 00:15:06.171383 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 00:15:06.171549 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 00:15:06.171738 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 00:15:06.171855 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 00:15:06.171951 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 00:15:06.171964 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:15:06.171973 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:15:06.171982 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:15:06.171990 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:15:06.171999 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:15:06.172011 kernel: iommu: Default domain type: Translated
Jan 17 00:15:06.172020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:15:06.172028 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:15:06.172172 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:15:06.172181 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:15:06.172190 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 00:15:06.172297 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 00:15:06.172392 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 00:15:06.172487 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:15:06.172503 kernel: vgaarb: loaded
Jan 17 00:15:06.172511 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:15:06.176383 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:15:06.176413 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:15:06.176428 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:15:06.176441 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:15:06.176450 kernel: pnp: PnP ACPI init
Jan 17 00:15:06.176459 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 00:15:06.176468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:15:06.176487 kernel: NET: Registered PF_INET protocol family
Jan 17 00:15:06.176495 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:15:06.176504 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:15:06.176512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:15:06.176534 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:15:06.176543 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:15:06.176551 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:15:06.176560 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:15:06.176569 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:15:06.176581 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:15:06.176589 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:15:06.176760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:15:06.176853 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:15:06.176938 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:15:06.177052 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:15:06.177149 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 00:15:06.177278 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 00:15:06.177391 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:15:06.177404 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:15:06.177505 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 34943 usecs
Jan 17 00:15:06.177538 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:15:06.177548 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:15:06.177556 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985b6280e7, max_idle_ns: 881590416988 ns
Jan 17 00:15:06.177565 kernel: Initialise system trusted keyrings
Jan 17 00:15:06.177574 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:15:06.177587 kernel: Key type asymmetric registered
Jan 17 00:15:06.177595 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:15:06.177605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:15:06.177613 kernel: io scheduler mq-deadline registered
Jan 17 00:15:06.177622 kernel: io scheduler kyber registered
Jan 17 00:15:06.177630 kernel: io scheduler bfq registered
Jan 17 00:15:06.177639 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:15:06.177648 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 00:15:06.177656 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:15:06.177668 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:15:06.177677 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:15:06.177685 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:15:06.177694 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:15:06.177702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:15:06.177711 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:15:06.177719 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:15:06.177853 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:15:06.177964 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:15:06.178060 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:15:05 UTC (1768608905)
Jan 17 00:15:06.178150 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 00:15:06.178161 kernel: intel_pstate: CPU model not supported
Jan 17 00:15:06.178169 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:15:06.178209 kernel: Segment Routing with IPv6
Jan 17 00:15:06.178218 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:15:06.178227 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:15:06.178235 kernel: Key type dns_resolver registered
Jan 17 00:15:06.178249 kernel: IPI shorthand broadcast: enabled
Jan 17 00:15:06.178257 kernel: sched_clock: Marking stable (1282009338, 260315841)->(1626909934, -84584755)
Jan 17 00:15:06.178266 kernel: registered taskstats version 1
Jan 17 00:15:06.178274 kernel: Loading compiled-in X.509 certificates
Jan 17 00:15:06.178283 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:15:06.178291 kernel: Key type .fscrypt registered
Jan 17 00:15:06.178300 kernel: Key type fscrypt-provisioning registered
Jan 17 00:15:06.178308 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:15:06.178317 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:15:06.178329 kernel: ima: No architecture policies found
Jan 17 00:15:06.178337 kernel: clk: Disabling unused clocks
Jan 17 00:15:06.178345 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:15:06.178354 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:15:06.178362 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:15:06.178390 kernel: Run /init as init process
Jan 17 00:15:06.178402 kernel: with arguments:
Jan 17 00:15:06.178411 kernel: /init
Jan 17 00:15:06.178419 kernel: with environment:
Jan 17 00:15:06.178430 kernel: HOME=/
Jan 17 00:15:06.178439 kernel: TERM=linux
Jan 17 00:15:06.178451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:15:06.178463 systemd[1]: Detected virtualization kvm.
Jan 17 00:15:06.178473 systemd[1]: Detected architecture x86-64.
Jan 17 00:15:06.178481 systemd[1]: Running in initrd.
Jan 17 00:15:06.178490 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:15:06.178499 systemd[1]: Hostname set to .
Jan 17 00:15:06.178513 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:15:06.178538 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:15:06.178547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:15:06.178556 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:15:06.178566 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:15:06.178575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:15:06.178587 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:15:06.178602 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:15:06.178624 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:15:06.178640 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:15:06.178655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:15:06.178667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:15:06.178676 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:15:06.178685 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:15:06.178699 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:15:06.178711 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:15:06.178720 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:15:06.178730 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:15:06.178739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:15:06.178748 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:15:06.178760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:15:06.178769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:15:06.178779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:15:06.178787 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:15:06.178796 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:15:06.178806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:15:06.178815 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:15:06.178824 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:15:06.178836 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:15:06.178845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:15:06.178854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:15:06.178863 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:15:06.178908 systemd-journald[185]: Collecting audit messages is disabled.
Jan 17 00:15:06.178936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:15:06.178945 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:15:06.178955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:15:06.178966 systemd-journald[185]: Journal started
Jan 17 00:15:06.178991 systemd-journald[185]: Runtime Journal (/run/log/journal/48527ad7480b44b3be936c72cb803bff) is 4.9M, max 39.3M, 34.4M free.
Jan 17 00:15:06.158887 systemd-modules-load[186]: Inserted module 'overlay'
Jan 17 00:15:06.185560 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:15:06.207556 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:15:06.206961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:15:06.287080 kernel: Bridge firewalling registered
Jan 17 00:15:06.210187 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 17 00:15:06.288222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:15:06.293342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:15:06.301412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:15:06.304623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:15:06.312792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:15:06.316225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:15:06.327204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:15:06.337831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:15:06.338925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:15:06.347916 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:15:06.353756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:15:06.355803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:15:06.363545 dracut-cmdline[218]: dracut-dracut-053
Jan 17 00:15:06.364974 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:15:06.407497 systemd-resolved[219]: Positive Trust Anchors:
Jan 17 00:15:06.407530 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:15:06.407566 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:15:06.416466 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 17 00:15:06.418405 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:15:06.419810 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:15:06.474622 kernel: SCSI subsystem initialized
Jan 17 00:15:06.488572 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:15:06.503557 kernel: iscsi: registered transport (tcp)
Jan 17 00:15:06.530741 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:15:06.530854 kernel: QLogic iSCSI HBA Driver
Jan 17 00:15:06.591196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:15:06.600896 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:15:06.638707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:15:06.638822 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:15:06.638844 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:15:06.695601 kernel: raid6: avx2x4 gen() 27303 MB/s
Jan 17 00:15:06.712600 kernel: raid6: avx2x2 gen() 28291 MB/s
Jan 17 00:15:06.730805 kernel: raid6: avx2x1 gen() 22173 MB/s
Jan 17 00:15:06.730939 kernel: raid6: using algorithm avx2x2 gen() 28291 MB/s
Jan 17 00:15:06.750711 kernel: raid6: .... xor() 16072 MB/s, rmw enabled
Jan 17 00:15:06.750830 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:15:06.779566 kernel: xor: automatically using best checksumming function avx
Jan 17 00:15:06.956567 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:15:06.971479 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:15:06.980874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:15:06.998436 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 17 00:15:07.004360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:15:07.011759 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:15:07.033299 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 17 00:15:07.087826 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:15:07.096972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:15:07.184167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:15:07.192813 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:15:07.231696 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:15:07.237144 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:15:07.238666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:15:07.240175 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:15:07.249836 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:15:07.273351 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:15:07.303552 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 00:15:07.310375 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 00:15:07.320604 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:15:07.324836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:15:07.324925 kernel: GPT:9289727 != 125829119
Jan 17 00:15:07.324945 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:15:07.326634 kernel: GPT:9289727 != 125829119
Jan 17 00:15:07.327809 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:15:07.329824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:15:07.335320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:15:07.336693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:15:07.338859 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:15:07.348440 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:15:07.339818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:15:07.340070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:15:07.344701 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:15:07.354733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:15:07.368615 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 00:15:07.374122 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 17 00:15:07.385073 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:15:07.385171 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:15:07.435556 kernel: libata version 3.00 loaded.
Jan 17 00:15:07.442573 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 00:15:07.446550 kernel: scsi host1: ata_piix
Jan 17 00:15:07.460028 kernel: scsi host2: ata_piix
Jan 17 00:15:07.460390 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 00:15:07.460415 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 00:15:07.486574 kernel: ACPI: bus type USB registered
Jan 17 00:15:07.486686 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:15:07.486706 kernel: usbcore: registered new interface driver hub
Jan 17 00:15:07.486725 kernel: usbcore: registered new device driver usb
Jan 17 00:15:07.505574 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457)
Jan 17 00:15:07.517562 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461)
Jan 17 00:15:07.532459 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:15:07.579500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:15:07.586246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:15:07.595396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:15:07.596331 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:15:07.603686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:15:07.612838 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:15:07.619864 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:15:07.625113 disk-uuid[537]: Primary Header is updated.
Jan 17 00:15:07.625113 disk-uuid[537]: Secondary Entries is updated.
Jan 17 00:15:07.625113 disk-uuid[537]: Secondary Header is updated.
Jan 17 00:15:07.642585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:15:07.656934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:15:07.679950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:15:07.697408 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 00:15:07.699229 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 00:15:07.699363 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 00:15:07.699504 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 00:15:07.722546 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:15:07.733660 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 00:15:08.657590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:15:08.658556 disk-uuid[539]: The operation has completed successfully.
Jan 17 00:15:08.710047 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:15:08.711626 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:15:08.724951 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:15:08.731662 sh[564]: Success
Jan 17 00:15:08.750640 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:15:08.821830 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:15:08.832901 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:15:08.837260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:15:08.885606 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:15:08.885740 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:15:08.885759 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:15:08.885772 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:15:08.885785 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:15:08.894936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:15:08.896839 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:15:08.905915 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:15:08.911861 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:15:08.924188 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:15:08.924317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:15:08.924340 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:15:08.930553 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:15:08.943198 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:15:08.945416 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:15:08.953625 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:15:08.962879 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:15:09.123657 ignition[645]: Ignition 2.19.0
Jan 17 00:15:09.127498 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:15:09.123700 ignition[645]: Stage: fetch-offline
Jan 17 00:15:09.129200 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:15:09.123774 ignition[645]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:15:09.123791 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:15:09.123983 ignition[645]: parsed url from cmdline: ""
Jan 17 00:15:09.123990 ignition[645]: no config URL provided
Jan 17 00:15:09.124000 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:15:09.124013 ignition[645]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:15:09.124024 ignition[645]: failed to fetch config: resource requires networking
Jan 17 00:15:09.124386 ignition[645]: Ignition finished successfully
Jan 17 00:15:09.139952 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:15:09.173080 systemd-networkd[753]: lo: Link UP
Jan 17 00:15:09.173096 systemd-networkd[753]: lo: Gained carrier
Jan 17 00:15:09.175847 systemd-networkd[753]: Enumeration completed
Jan 17 00:15:09.176002 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:15:09.177092 systemd[1]: Reached target network.target - Network.
Jan 17 00:15:09.177679 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:15:09.177683 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 00:15:09.178491 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:15:09.178494 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:15:09.179225 systemd-networkd[753]: eth0: Link UP
Jan 17 00:15:09.179229 systemd-networkd[753]: eth0: Gained carrier
Jan 17 00:15:09.179237 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:15:09.185930 systemd-networkd[753]: eth1: Link UP
Jan 17 00:15:09.185935 systemd-networkd[753]: eth1: Gained carrier
Jan 17 00:15:09.185950 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:15:09.188782 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:15:09.197634 systemd-networkd[753]: eth0: DHCPv4 address 159.223.199.43/20, gateway 159.223.192.1 acquired from 169.254.169.253
Jan 17 00:15:09.202637 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Jan 17 00:15:09.227356 ignition[755]: Ignition 2.19.0
Jan 17 00:15:09.227371 ignition[755]: Stage: fetch
Jan 17 00:15:09.227602 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:15:09.227613 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:15:09.227746 ignition[755]: parsed url from cmdline: ""
Jan 17 00:15:09.227750 ignition[755]: no config URL provided
Jan 17 00:15:09.227757 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:15:09.227766 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:15:09.227788 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 00:15:09.243767 ignition[755]: GET result: OK
Jan 17 00:15:09.244788 ignition[755]: parsing config with SHA512: e40944e659df1ab8fd614baf4178b58df47b203fc9d9b4edd9896d59cf4ddcdef461d1639ad640da1a26f8e994a9dd1f94ef4017d5290c42079fc869ae35ca0d
Jan 17 00:15:09.252681 unknown[755]: fetched base config from "system"
Jan 17 00:15:09.252695 unknown[755]: fetched base config from "system"
Jan 17 00:15:09.253440 ignition[755]: fetch: fetch complete
Jan 17 00:15:09.252725 unknown[755]: fetched user config from "digitalocean"
Jan 17 00:15:09.253446 ignition[755]: fetch: fetch passed
Jan 17 00:15:09.256413 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:15:09.253554 ignition[755]: Ignition finished successfully
Jan 17 00:15:09.265917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:15:09.288662 ignition[762]: Ignition 2.19.0
Jan 17 00:15:09.288676 ignition[762]: Stage: kargs
Jan 17 00:15:09.289062 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:15:09.292143 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:15:09.289079 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:15:09.290126 ignition[762]: kargs: kargs passed
Jan 17 00:15:09.290186 ignition[762]: Ignition finished successfully
Jan 17 00:15:09.307660 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:15:09.324511 ignition[768]: Ignition 2.19.0
Jan 17 00:15:09.324542 ignition[768]: Stage: disks
Jan 17 00:15:09.324844 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:15:09.327348 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:15:09.324861 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:15:09.330507 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:15:09.325945 ignition[768]: disks: disks passed
Jan 17 00:15:09.331749 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:15:09.326006 ignition[768]: Ignition finished successfully
Jan 17 00:15:09.340594 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:15:09.342280 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:15:09.343853 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:15:09.351905 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:15:09.383260 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:15:09.387570 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:15:09.395495 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:15:09.519577 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:15:09.520936 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:15:09.522655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:15:09.530775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:15:09.540194 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:15:09.545836 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 00:15:09.548866 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:15:09.575075 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (785)
Jan 17 00:15:09.575113 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:15:09.575136 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:15:09.575156 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:15:09.553071 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:15:09.553134 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:15:09.581956 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:15:09.582974 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:15:09.593742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:15:09.613505 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:15:09.655156 coreos-metadata[787]: Jan 17 00:15:09.654 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:15:09.666452 coreos-metadata[787]: Jan 17 00:15:09.666 INFO Fetch successful
Jan 17 00:15:09.675749 coreos-metadata[788]: Jan 17 00:15:09.675 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:15:09.674458 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 00:15:09.674662 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 00:15:09.685331 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:15:09.688512 coreos-metadata[788]: Jan 17 00:15:09.688 INFO Fetch successful
Jan 17 00:15:09.694483 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:15:09.696914 coreos-metadata[788]: Jan 17 00:15:09.696 INFO wrote hostname ci-4081.3.6-n-cccb0c3e85 to /sysroot/etc/hostname
Jan 17 00:15:09.698595 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:15:09.703236 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:15:09.710576 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:15:09.823262 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:15:09.829791 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:15:09.832763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:15:09.849556 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:09.871477 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:15:09.879323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:15:09.881622 ignition[906]: INFO : Ignition 2.19.0 Jan 17 00:15:09.881622 ignition[906]: INFO : Stage: mount Jan 17 00:15:09.886034 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:09.886034 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:09.888230 ignition[906]: INFO : mount: mount passed Jan 17 00:15:09.888230 ignition[906]: INFO : Ignition finished successfully Jan 17 00:15:09.889442 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:15:09.895782 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:15:09.922957 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:15:09.936655 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Jan 17 00:15:09.941887 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:09.942017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:15:09.945552 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:15:09.951585 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:15:09.953484 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:15:09.983981 ignition[934]: INFO : Ignition 2.19.0 Jan 17 00:15:09.985370 ignition[934]: INFO : Stage: files Jan 17 00:15:09.987451 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:09.987451 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:09.987451 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:15:09.990361 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:15:09.990361 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:15:09.995054 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:15:09.996637 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:15:09.998794 unknown[934]: wrote ssh authorized keys file for user: core Jan 17 00:15:10.000282 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:15:10.001945 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:15:10.003104 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:15:10.003104 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:15:10.003104 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:15:10.197506 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:15:10.275501 ignition[934]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:15:10.277187 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:15:10.287767 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:10.287767 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:10.287767 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:10.287767 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:15:10.349751 systemd-networkd[753]: eth0: Gained IPv6LL Jan 17 00:15:10.761873 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:15:10.797762 systemd-networkd[753]: eth1: Gained IPv6LL Jan 17 00:15:11.278810 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:11.280882 ignition[934]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 00:15:11.283052 ignition[934]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(e): [started] processing 
unit "prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:15:11.285132 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:15:11.285132 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:15:11.285132 ignition[934]: INFO : files: files passed Jan 17 00:15:11.285132 ignition[934]: INFO : Ignition finished successfully Jan 17 00:15:11.285560 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:15:11.296729 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:15:11.302146 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:15:11.307814 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:15:11.307961 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:15:11.326928 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:11.326928 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:11.330420 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:11.334592 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:15:11.336758 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:15:11.344898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:15:11.397149 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:15:11.397283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:15:11.399452 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:15:11.401012 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:15:11.402862 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:15:11.411002 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:15:11.437565 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:15:11.442809 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:15:11.464908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:11.466171 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:11.469770 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 17 00:15:11.470808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:15:11.471024 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:15:11.473640 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:15:11.474804 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:15:11.476782 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:15:11.478589 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:15:11.480336 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:15:11.482318 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:15:11.484205 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:15:11.486345 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:15:11.488166 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:15:11.490104 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:15:11.492032 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:15:11.492249 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:15:11.494562 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:11.495681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:11.497445 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:15:11.498043 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:11.499390 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:15:11.499632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:15:11.502062 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:15:11.502246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:15:11.503334 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:15:11.503493 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:15:11.504854 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:15:11.505041 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:15:11.516004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:15:11.517120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:15:11.517549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:11.530991 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:15:11.531983 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:15:11.532313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:11.534747 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:15:11.535729 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:15:11.550390 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:15:11.550582 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:15:11.580251 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 17 00:15:11.584482 ignition[987]: INFO : Ignition 2.19.0 Jan 17 00:15:11.586962 ignition[987]: INFO : Stage: umount Jan 17 00:15:11.589555 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:11.589555 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:11.592810 ignition[987]: INFO : umount: umount passed Jan 17 00:15:11.592810 ignition[987]: INFO : Ignition finished successfully Jan 17 00:15:11.592006 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:15:11.592190 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:15:11.595375 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:15:11.595497 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:15:11.598205 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:15:11.598294 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:15:11.599811 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:15:11.599869 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:15:11.601417 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:15:11.601497 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:15:11.603232 systemd[1]: Stopped target network.target - Network. Jan 17 00:15:11.604761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:15:11.604851 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:15:11.606364 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:15:11.607806 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:15:11.612637 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:11.613808 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:15:11.615455 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:15:11.616917 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:15:11.617059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:15:11.618653 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:15:11.618701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:15:11.620620 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:15:11.620701 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:15:11.622222 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:15:11.622275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:15:11.623740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:15:11.623785 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:15:11.625689 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:15:11.627563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:15:11.630616 systemd-networkd[753]: eth1: DHCPv6 lease lost Jan 17 00:15:11.634591 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:15:11.635126 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:15:11.635768 systemd-networkd[753]: eth0: DHCPv6 lease lost Jan 17 00:15:11.641695 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 00:15:11.641860 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:15:11.643858 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:15:11.643935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:11.651766 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:15:11.652908 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:15:11.653079 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:15:11.655983 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:15:11.656083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:11.658974 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:15:11.659076 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:15:11.661089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:15:11.661173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:11.663577 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:15:11.678382 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:15:11.679598 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:11.683450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:15:11.685442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:11.686660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:15:11.686720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:11.687512 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:15:11.687632 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:15:11.690509 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:15:11.690616 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:15:11.692249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:15:11.692319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:15:11.703919 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:15:11.707070 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:15:11.707198 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:15:11.711188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:11.711293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:11.716424 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:15:11.716593 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:15:11.718603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:15:11.718729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:15:11.721114 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:15:11.729896 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:15:11.742599 systemd[1]: Switching root. 
Jan 17 00:15:11.794174 systemd-journald[185]: Journal stopped Jan 17 00:15:13.164690 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Jan 17 00:15:13.164792 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:15:13.164809 kernel: SELinux: policy capability open_perms=1 Jan 17 00:15:13.164827 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:15:13.164838 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:15:13.164849 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:15:13.164866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:15:13.164882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:15:13.164893 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:15:13.164904 kernel: audit: type=1403 audit(1768608912.103:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:15:13.164917 systemd[1]: Successfully loaded SELinux policy in 50.645ms. Jan 17 00:15:13.164961 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.035ms. Jan 17 00:15:13.164981 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:15:13.165000 systemd[1]: Detected virtualization kvm. Jan 17 00:15:13.165017 systemd[1]: Detected architecture x86-64. Jan 17 00:15:13.165032 systemd[1]: Detected first boot. Jan 17 00:15:13.165044 systemd[1]: Hostname set to <ci-4081.3.6-n-cccb0c3e85>. Jan 17 00:15:13.165057 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:15:13.165073 zram_generator::config[1049]: No configuration found. Jan 17 00:15:13.165086 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:15:13.165099 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:15:13.165112 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:15:13.165124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:15:13.165140 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:15:13.165151 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:15:13.165163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:15:13.165176 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:15:13.165188 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:15:13.165200 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:15:13.165212 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:15:13.165223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:13.165234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:13.165250 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:15:13.165262 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
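Two details in this block are worth unpacking: the journal is handed over across the switch-root (PID 1 SIGTERMs the initrd journald before SELinux policy loads in the new root), and because this is a first boot, systemd derives the machine ID from the VM's UUID instead of generating a random one. On KVM that UUID is exposed through DMI; a sketch of the idea, with the caveat that systemd's real logic consults several sources and validates the value:

def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    # "Initializing machine ID from VM UUID": /etc/machine-id holds 32
    # lowercase hex digits, i.e. a UUID with its dashes removed.
    with open(path) as f:
        uuid = f.read().strip()
    return uuid.replace("-", "").lower()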
Jan 17 00:15:13.165276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:15:13.165294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:15:13.165309 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:15:13.165320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:13.165331 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:15:13.165345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:13.165360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:15:13.165372 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:15:13.165383 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:15:13.165396 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:15:13.165407 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:15:13.165419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:15:13.165430 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:15:13.165444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:13.165457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:13.165468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:13.165479 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:15:13.165497 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:15:13.165509 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:15:13.165537 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:15:13.165550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:13.165562 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:15:13.165577 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:15:13.165589 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:15:13.165601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:15:13.165613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:13.165627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:15:13.165638 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:15:13.165650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:13.165661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:13.165674 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:13.165688 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:15:13.165700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:13.165712 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 17 00:15:13.165724 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:15:13.165737 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:15:13.165748 kernel: fuse: init (API version 7.39) Jan 17 00:15:13.165759 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:15:13.165771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:15:13.165785 kernel: ACPI: bus type drm_connector registered Jan 17 00:15:13.165796 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:15:13.165807 kernel: loop: module loaded Jan 17 00:15:13.165819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:15:13.165868 systemd-journald[1142]: Collecting audit messages is disabled. Jan 17 00:15:13.165896 systemd-journald[1142]: Journal started Jan 17 00:15:13.165922 systemd-journald[1142]: Runtime Journal (/run/log/journal/48527ad7480b44b3be936c72cb803bff) is 4.9M, max 39.3M, 34.4M free. Jan 17 00:15:13.173563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:15:13.182561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:13.190563 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:15:13.192302 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:15:13.195785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:15:13.196840 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:15:13.197800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:15:13.198806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:15:13.199847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:15:13.201039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:15:13.202288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:13.203396 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:15:13.203668 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:15:13.204797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:13.204980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:13.206312 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:13.206956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:13.215286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:13.215675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:13.216905 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:15:13.217211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:15:13.218293 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:13.218803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:13.220135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 17 00:15:13.221466 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:15:13.222880 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:15:13.236364 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:15:13.244794 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:15:13.249721 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:15:13.256113 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:15:13.264858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:15:13.278858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:15:13.279757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:13.284970 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:15:13.285854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:13.300891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:15:13.308830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:15:13.309977 systemd-journald[1142]: Time spent on flushing to /var/log/journal/48527ad7480b44b3be936c72cb803bff is 56.975ms for 970 entries. Jan 17 00:15:13.309977 systemd-journald[1142]: System Journal (/var/log/journal/48527ad7480b44b3be936c72cb803bff) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:15:13.392362 systemd-journald[1142]: Received client request to flush runtime journal. Jan 17 00:15:13.315507 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:15:13.316747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:13.317833 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:15:13.326763 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:15:13.334666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:15:13.338998 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:15:13.369102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:13.376469 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:15:13.394890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:15:13.404004 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 17 00:15:13.404024 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 17 00:15:13.411310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:15:13.422911 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:15:13.461294 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 17 00:15:13.471010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:15:13.500065 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 17 00:15:13.500100 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 17 00:15:13.512213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:15:14.014771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:15:14.031037 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:15:14.060993 systemd-udevd[1219]: Using default interface naming scheme 'v255'. Jan 17 00:15:14.083894 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:14.092839 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:15:14.123756 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:15:14.158468 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:15:14.172138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:14.172300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:14.177747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:14.184868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:14.198754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:14.199859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:15:14.199916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:15:14.199966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:14.212010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:14.212219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:14.218951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:14.219148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:14.221783 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:14.222014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:14.223968 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:15:14.229384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:14.229462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 00:15:14.310558 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1228) Jan 17 00:15:14.323477 systemd-networkd[1225]: lo: Link UP Jan 17 00:15:14.324028 systemd-networkd[1225]: lo: Gained carrier Jan 17 00:15:14.326933 systemd-networkd[1225]: Enumeration completed Jan 17 00:15:14.327208 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:15:14.329651 systemd-networkd[1225]: eth0: Configuring with /run/systemd/network/10-72:25:65:91:28:9d.network. Jan 17 00:15:14.331441 systemd-networkd[1225]: eth1: Configuring with /run/systemd/network/10-3a:65:9b:92:e1:39.network. Jan 17 00:15:14.332396 systemd-networkd[1225]: eth0: Link UP Jan 17 00:15:14.332707 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:15:14.334328 systemd-networkd[1225]: eth0: Gained carrier Jan 17 00:15:14.338938 systemd-networkd[1225]: eth1: Link UP Jan 17 00:15:14.339056 systemd-networkd[1225]: eth1: Gained carrier Jan 17 00:15:14.364563 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:15:14.404550 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 00:15:14.421556 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:15:14.444397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:15:14.462550 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:15:14.493589 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:15:14.498919 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 00:15:14.501599 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 00:15:14.509967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:14.517563 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:15:14.518550 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:15:14.518598 kernel: [drm] features: -context_init Jan 17 00:15:14.524485 kernel: [drm] number of scanouts: 1 Jan 17 00:15:14.525626 kernel: [drm] number of cap sets: 0 Jan 17 00:15:14.530554 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 00:15:14.538080 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:15:14.538164 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:15:14.540083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:14.540368 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:14.559680 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:15:14.582010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:14.586133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:14.586402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:14.604626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:14.731568 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:15:14.749782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:14.757651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:15:14.770898 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
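The interface setup above is driven by generated config: the initrd wrote one .network unit per NIC into /run/systemd/network, keyed by MAC address (10-72:25:65:91:28:9d.network for eth0, 10-3a:65:9b:92:e1:39.network for eth1), and networkd enumerates devices and applies whichever unit's [Match] section fits. A sketch of emitting such a unit; the file name pattern is from the log, but the contents here are a plausible minimal DHCP config, not a copy of what Flatcar actually generates:

import pathlib

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> pathlib.Path:
    # File name pattern taken from the log: 10-<mac>.network.
    unit = pathlib.Path(rundir) / f"10-{mac}.network"
    unit.parent.mkdir(parents=True, exist_ok=True)
    unit.write_text(
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=ipv4\n"  # assumption: the log only shows DHCPv4 leases being acquired
    )
    return unit

write_network_unit("72:25:65:91:28:9d")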
Jan 17 00:15:14.790843 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:15:14.829355 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:15:14.831865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:14.838879 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:15:14.854431 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:15:14.891413 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:15:14.892386 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:15:14.899755 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 00:15:14.900777 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:15:14.900845 systemd[1]: Reached target machines.target - Containers. Jan 17 00:15:14.906918 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:15:14.925742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:15:14.931561 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 00:15:14.934794 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 00:15:14.938378 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:15:14.942174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:15:14.948811 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:15:14.960438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:15:14.962689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:14.968709 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:15:14.996974 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:15:15.004787 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:15:15.018921 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:15:15.030209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:15:15.032788 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:15:15.053915 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:15:15.084383 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 00:15:15.124320 kernel: loop2: detected capacity change from 0 to 8 Jan 17 00:15:15.146768 kernel: loop3: detected capacity change from 0 to 224512 Jan 17 00:15:15.191100 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:15:15.216572 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:15:15.232744 kernel: loop6: detected capacity change from 0 to 8 Jan 17 00:15:15.236577 kernel: loop7: detected capacity change from 0 to 224512 Jan 17 00:15:15.245391 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
Jan 17 00:15:15.246014 (sd-merge)[1310]: Merged extensions into '/usr'. Jan 17 00:15:15.262143 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:15:15.262162 systemd[1]: Reloading... Jan 17 00:15:15.371058 zram_generator::config[1336]: No configuration found. Jan 17 00:15:15.470277 systemd-networkd[1225]: eth1: Gained IPv6LL Jan 17 00:15:15.544897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:15.614543 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:15:15.623213 systemd[1]: Reloading finished in 360 ms. Jan 17 00:15:15.645817 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:15:15.647657 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:15:15.650264 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:15:15.666837 systemd[1]: Starting ensure-sysext.service... Jan 17 00:15:15.671716 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:15:15.680317 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:15:15.680340 systemd[1]: Reloading... Jan 17 00:15:15.724213 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:15:15.724564 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:15:15.725412 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:15:15.726771 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 17 00:15:15.726844 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 17 00:15:15.734493 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:15.734509 systemd-tmpfiles[1392]: Skipping /boot Jan 17 00:15:15.753444 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:15.753461 systemd-tmpfiles[1392]: Skipping /boot Jan 17 00:15:15.760559 zram_generator::config[1419]: No configuration found. Jan 17 00:15:15.919444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:15.987316 systemd[1]: Reloading finished in 306 ms. Jan 17 00:15:16.012609 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:16.029166 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:16.035983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:15:16.047877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:15:16.056150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:15:16.066272 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
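The (sd-merge) lines are systemd-sysext composing /usr: each extension image found in the preceding block (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean, attached as the loopN devices) contributes a /usr tree, and "Merged extensions into '/usr'" means stacking those trees over the base /usr with a read-only overlayfs mount, which is why a reload follows. A sketch of the mechanism only; the staging paths and exact mount options are assumptions, not what systemd literally runs:

def sysext_overlay_cmd(extension_usr_dirs: list[str]) -> list[str]:
    # overlayfs resolves lookups left to right, so extensions are listed
    # before the base /usr, which sits at the bottom of the stack.
    lowerdir = ":".join(extension_usr_dirs + ["/usr"])
    return ["mount", "-t", "overlay", "overlay",
            "-o", f"lowerdir={lowerdir}", "/usr"]

print(" ".join(sysext_overlay_cmd([
    "/run/extensions/kubernetes/usr",        # hypothetical staging paths
    "/run/extensions/oem-digitalocean/usr",
])))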
Jan 17 00:15:16.076003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.078253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:16.095056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:16.105854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:16.111226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:16.113161 systemd-networkd[1225]: eth0: Gained IPv6LL Jan 17 00:15:16.117739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:16.118104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.124387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:16.126749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:16.142745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:15:16.154034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:16.154267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:16.158891 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:16.159830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:16.173302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.173602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:16.188408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:16.195699 augenrules[1506]: No rules Jan 17 00:15:16.196844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:16.203678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:16.204478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:16.228907 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:15:16.229450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.237388 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:16.241657 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:15:16.244771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:15:16.248357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:16.248826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:16.257671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:16.257900 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 00:15:16.264244 systemd-resolved[1480]: Positive Trust Anchors: Jan 17 00:15:16.264257 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:15:16.264304 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:15:16.271923 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:16.273055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:16.274659 systemd-resolved[1480]: Using system hostname 'ci-4081.3.6-n-cccb0c3e85'. Jan 17 00:15:16.277972 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:15:16.282298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:15:16.291761 systemd[1]: Reached target network.target - Network. Jan 17 00:15:16.292414 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:15:16.295633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:16.297671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.298059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:16.309986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:16.314886 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:16.324114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:16.329221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:16.331623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:16.331834 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:15:16.331931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:16.335656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:16.335855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:16.338510 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:16.338751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:16.340195 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:16.340437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:16.347422 systemd[1]: Finished ensure-sysext.service. Jan 17 00:15:16.356318 systemd[1]: modprobe@loop.service: Deactivated successfully. 
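The positive trust anchor resolved prints above is the DNSSEC root: a DS record naming key tag 20326 (the current root KSK), algorithm 8 (RSASHA256), digest type 2 (SHA-256), and the digest of the key itself. Splitting that log line into its RFC 4034 fields is mechanical; a small sketch:

from dataclasses import dataclass

@dataclass
class DSRecord:
    owner: str
    key_tag: int      # 20326 = the current root KSK
    algorithm: int    # 8 = RSASHA256
    digest_type: int  # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    # Expects the resolved log form: ". IN DS 20326 8 2 <digest>"
    owner, _, _, tag, alg, dtype, digest = line.split()
    return DSRecord(owner, int(tag), int(alg), int(dtype), digest)

anchor = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")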
Jan 17 00:15:16.356748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:16.359763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:16.359932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:16.369968 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:15:16.429721 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:15:16.431023 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:15:16.433500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:15:16.434106 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:15:16.435166 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:15:16.437469 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:15:16.437556 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:15:16.438160 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:15:16.440982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:15:16.441953 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:15:16.445243 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:15:16.446332 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:15:16.450874 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:15:16.455578 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:15:16.457585 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:15:16.458155 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:15:16.458636 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:15:16.459262 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:15:16.459305 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:16.459331 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:16.463704 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:15:16.467015 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:15:16.478903 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:15:17.009531 systemd-timesyncd[1542]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Jan 17 00:15:17.009942 systemd-timesyncd[1542]: Initial clock synchronization to Sat 2026-01-17 00:15:17.009353 UTC. Jan 17 00:15:17.010083 systemd-resolved[1480]: Clock change detected. Flushing caches. Jan 17 00:15:17.015038 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:15:17.021101 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
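timesyncd's first successful exchange with the pool server steps the clock forward (the journal timestamps jump from :16.43 to :17.01 here), which is why resolved logs "Clock change detected. Flushing caches." The wire exchange is small enough to sketch: an SNTP client sends a 48-byte mode-3 request and reads the server's transmit timestamp, which counts seconds from the 1900 NTP epoch. This is a bare sketch with no round-trip correction, unlike real timesyncd:

import socket
import struct

NTP_TO_UNIX = 2208988800  # seconds from 1900-01-01 (NTP epoch) to 1970-01-01

def sntp_time(server: str = "0.flatcar.pool.ntp.org", port: int = 123) -> float:
    # First byte 0x1b = leap 0, version 3, mode 3 (client); rest zeroed.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5.0)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
    return secs - NTP_TO_UNIX + frac / 2**32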
Jan 17 00:15:17.024389 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:15:17.029401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:17.035179 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:15:17.043652 jq[1550]: false Jan 17 00:15:17.051083 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:15:17.063914 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:15:17.073462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:15:17.093097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:15:17.099176 coreos-metadata[1547]: Jan 17 00:15:17.098 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:17.103069 dbus-daemon[1548]: [system] SELinux support is enabled Jan 17 00:15:17.109122 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:15:17.110402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:15:17.120948 coreos-metadata[1547]: Jan 17 00:15:17.111 INFO Fetch successful Jan 17 00:15:17.121426 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:15:17.140136 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:15:17.142813 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:15:17.166979 update_engine[1566]: I20260117 00:15:17.161623 1566 main.cc:92] Flatcar Update Engine starting Jan 17 00:15:17.166979 update_engine[1566]: I20260117 00:15:17.163841 1566 update_check_scheduler.cc:74] Next update check in 6m59s Jan 17 00:15:17.168237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:15:17.168569 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:15:17.190804 extend-filesystems[1553]: Found loop4 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found loop5 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found loop6 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found loop7 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda1 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda2 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda3 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found usr Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda4 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda6 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda7 Jan 17 00:15:17.190804 extend-filesystems[1553]: Found vda9 Jan 17 00:15:17.190804 extend-filesystems[1553]: Checking size of /dev/vda9 Jan 17 00:15:17.535271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1220) Jan 17 00:15:17.541051 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 00:15:17.200521 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:15:17.541389 extend-filesystems[1553]: Resized partition /dev/vda9 Jan 17 00:15:17.200875 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 00:15:17.546383 extend-filesystems[1600]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:15:17.558399 jq[1570]: true Jan 17 00:15:17.233481 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:15:17.233818 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:15:17.286818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:15:17.572670 tar[1585]: linux-amd64/LICENSE Jan 17 00:15:17.572670 tar[1585]: linux-amd64/helm Jan 17 00:15:17.500808 (ntainerd)[1599]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:15:17.577217 jq[1588]: true Jan 17 00:15:17.591634 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:15:17.623657 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:15:17.635268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:15:17.635465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:15:17.635512 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:15:17.640199 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:15:17.640293 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 00:15:17.640332 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:15:17.645631 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:15:17.656456 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:15:17.704603 systemd-logind[1564]: New seat seat0. Jan 17 00:15:17.729800 systemd-logind[1564]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:15:17.729853 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:15:17.730150 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:15:17.816511 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:17.822299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:15:17.842598 systemd[1]: Starting sshkeys.service... Jan 17 00:15:17.900851 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:15:17.907956 containerd[1599]: time="2026-01-17T00:15:17.902629574Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:15:17.912983 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:15:17.948857 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 00:15:17.969315 containerd[1599]: time="2026-01-17T00:15:17.969256212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:15:17.971924 coreos-metadata[1644]: Jan 17 00:15:17.970 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:17.977015 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:15:17.977015 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 00:15:17.977015 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 00:15:18.007692 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Jan 17 00:15:18.007692 extend-filesystems[1553]: Found vdb Jan 17 00:15:18.009326 coreos-metadata[1644]: Jan 17 00:15:17.986 INFO Fetch successful Jan 17 00:15:17.977589 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981330249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981374217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981393712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981568910Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981585867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981642642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.981655695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.985423945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.985459187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.985476994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:18.009524 containerd[1599]: time="2026-01-17T00:15:17.985487321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:17.984324 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.986375452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.987710264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.987961310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.987978915Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.988090771Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:15:18.010902 containerd[1599]: time="2026-01-17T00:15:17.988133155Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011308287Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011424565Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011457886Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011487746Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011514535Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:15:18.011930 containerd[1599]: time="2026-01-17T00:15:18.011797809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:15:18.013005 containerd[1599]: time="2026-01-17T00:15:18.012967537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:15:18.013313 containerd[1599]: time="2026-01-17T00:15:18.013287324Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:15:18.013409 containerd[1599]: time="2026-01-17T00:15:18.013390397Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:15:18.013492 containerd[1599]: time="2026-01-17T00:15:18.013473877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:15:18.013605 containerd[1599]: time="2026-01-17T00:15:18.013584304Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.013698 containerd[1599]: time="2026-01-17T00:15:18.013679847Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.013778 containerd[1599]: time="2026-01-17T00:15:18.013759989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.015904723Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.015948658Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.015972461Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.015998483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016026915Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016085480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016114280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016141794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016166284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016187907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016230790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016255599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016298000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.017903 containerd[1599]: time="2026-01-17T00:15:18.016325382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016353316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016376703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016397857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016419557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016448497Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016484993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016525460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016556490Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016628141Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016659976Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016681390Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016704234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:15:18.018473 containerd[1599]: time="2026-01-17T00:15:18.016724028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:15:18.018965 containerd[1599]: time="2026-01-17T00:15:18.016746390Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:15:18.018965 containerd[1599]: time="2026-01-17T00:15:18.016771746Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:15:18.018965 containerd[1599]: time="2026-01-17T00:15:18.016805169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:15:18.019071 containerd[1599]: time="2026-01-17T00:15:18.017255668Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:15:18.019071 containerd[1599]: time="2026-01-17T00:15:18.017383013Z" level=info msg="Connect containerd service" Jan 17 00:15:18.019071 containerd[1599]: time="2026-01-17T00:15:18.017473472Z" level=info msg="using legacy CRI server" Jan 17 00:15:18.019071 containerd[1599]: time="2026-01-17T00:15:18.017489343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:15:18.019071 containerd[1599]: time="2026-01-17T00:15:18.017659325Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:15:18.021639 containerd[1599]: time="2026-01-17T00:15:18.021596842Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:15:18.021803 unknown[1644]: wrote ssh authorized keys file for user: core Jan 17 00:15:18.038099 containerd[1599]: time="2026-01-17T00:15:18.030419938Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:15:18.038099 containerd[1599]: time="2026-01-17T00:15:18.030500939Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:15:18.038099 containerd[1599]: time="2026-01-17T00:15:18.030615134Z" level=info msg="Start subscribing containerd event" Jan 17 00:15:18.038099 containerd[1599]: time="2026-01-17T00:15:18.030674571Z" level=info msg="Start recovering state" Jan 17 00:15:18.038099 containerd[1599]: time="2026-01-17T00:15:18.030798500Z" level=info msg="Start event monitor" Jan 17 00:15:18.046202 containerd[1599]: time="2026-01-17T00:15:18.030821647Z" level=info msg="Start snapshots syncer" Jan 17 00:15:18.046202 containerd[1599]: time="2026-01-17T00:15:18.044015120Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:15:18.046202 containerd[1599]: time="2026-01-17T00:15:18.044030382Z" level=info msg="Start streaming server" Jan 17 00:15:18.046202 containerd[1599]: time="2026-01-17T00:15:18.044131015Z" level=info msg="containerd successfully booted in 0.142574s" Jan 17 00:15:18.052921 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:15:18.091193 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:15:18.101086 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:18.098891 systemd[1]: Finished sshkeys.service. Jan 17 00:15:18.137704 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:15:18.550897 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:15:18.638104 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:15:18.655077 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:15:18.678152 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:15:18.678414 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:15:18.694175 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:15:18.743290 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:15:18.755487 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:15:18.768461 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:15:18.771577 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:15:19.021507 tar[1585]: linux-amd64/README.md Jan 17 00:15:19.048771 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:15:19.262256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:19.262635 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:19.266714 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:15:19.268697 systemd[1]: Startup finished in 7.798s (kernel) + 6.687s (userspace) = 14.485s.
Jan 17 00:15:19.947068 kubelet[1705]: E0117 00:15:19.946995 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:19.950249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:19.950637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:20.811303 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:15:20.821193 systemd[1]: Started sshd@0-159.223.199.43:22-4.153.228.146:46736.service - OpenSSH per-connection server daemon (4.153.228.146:46736). Jan 17 00:15:21.240268 sshd[1717]: Accepted publickey for core from 4.153.228.146 port 46736 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:21.243275 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:21.257969 systemd-logind[1564]: New session 1 of user core. Jan 17 00:15:21.258760 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:15:21.267265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:15:21.291041 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:15:21.306545 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:15:21.311743 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:15:21.443455 systemd[1723]: Queued start job for default target default.target. Jan 17 00:15:21.444513 systemd[1723]: Created slice app.slice - User Application Slice. Jan 17 00:15:21.444550 systemd[1723]: Reached target paths.target - Paths. Jan 17 00:15:21.444564 systemd[1723]: Reached target timers.target - Timers. Jan 17 00:15:21.459035 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:15:21.468172 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:15:21.468243 systemd[1723]: Reached target sockets.target - Sockets. Jan 17 00:15:21.468259 systemd[1723]: Reached target basic.target - Basic System. Jan 17 00:15:21.468320 systemd[1723]: Reached target default.target - Main User Target. Jan 17 00:15:21.468354 systemd[1723]: Startup finished in 147ms. Jan 17 00:15:21.468922 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:15:21.475402 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:15:21.782283 systemd[1]: Started sshd@1-159.223.199.43:22-4.153.228.146:46750.service - OpenSSH per-connection server daemon (4.153.228.146:46750). Jan 17 00:15:22.188293 sshd[1735]: Accepted publickey for core from 4.153.228.146 port 46750 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:22.190394 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:22.197668 systemd-logind[1564]: New session 2 of user core. Jan 17 00:15:22.208760 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:15:22.484048 sshd[1735]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:22.489091 systemd[1]: sshd@1-159.223.199.43:22-4.153.228.146:46750.service: Deactivated successfully. Jan 17 00:15:22.493373 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:15:22.494378 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:15:22.495694 systemd-logind[1564]: Removed session 2. Jan 17 00:15:22.571421 systemd[1]: Started sshd@2-159.223.199.43:22-4.153.228.146:46752.service - OpenSSH per-connection server daemon (4.153.228.146:46752). Jan 17 00:15:23.053576 sshd[1743]: Accepted publickey for core from 4.153.228.146 port 46752 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:23.055873 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:23.063095 systemd-logind[1564]: New session 3 of user core. Jan 17 00:15:23.075404 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:15:23.380304 sshd[1743]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:23.390985 systemd[1]: sshd@2-159.223.199.43:22-4.153.228.146:46752.service: Deactivated successfully. Jan 17 00:15:23.394221 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:15:23.397290 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:15:23.399289 systemd-logind[1564]: Removed session 3. Jan 17 00:15:23.465319 systemd[1]: Started sshd@3-159.223.199.43:22-4.153.228.146:46754.service - OpenSSH per-connection server daemon (4.153.228.146:46754). Jan 17 00:15:23.948934 sshd[1751]: Accepted publickey for core from 4.153.228.146 port 46754 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:23.951598 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:23.960204 systemd-logind[1564]: New session 4 of user core. Jan 17 00:15:23.971593 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:15:24.283088 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:24.287609 systemd[1]: sshd@3-159.223.199.43:22-4.153.228.146:46754.service: Deactivated successfully. Jan 17 00:15:24.292501 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:15:24.293577 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:15:24.295269 systemd-logind[1564]: Removed session 4. Jan 17 00:15:24.375344 systemd[1]: Started sshd@4-159.223.199.43:22-4.153.228.146:46758.service - OpenSSH per-connection server daemon (4.153.228.146:46758). Jan 17 00:15:24.841685 sshd[1759]: Accepted publickey for core from 4.153.228.146 port 46758 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:24.843620 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:24.850760 systemd-logind[1564]: New session 5 of user core. Jan 17 00:15:24.860389 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:15:25.122724 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:15:25.123062 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:25.137269 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:25.212255 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:25.215934 systemd[1]: sshd@4-159.223.199.43:22-4.153.228.146:46758.service: Deactivated successfully. Jan 17 00:15:25.219353 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:15:25.221214 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:15:25.222712 systemd-logind[1564]: Removed session 5.
Jan 17 00:15:25.279338 systemd[1]: Started sshd@5-159.223.199.43:22-4.153.228.146:38404.service - OpenSSH per-connection server daemon (4.153.228.146:38404). Jan 17 00:15:25.709666 sshd[1768]: Accepted publickey for core from 4.153.228.146 port 38404 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:25.711743 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:25.717461 systemd-logind[1564]: New session 6 of user core. Jan 17 00:15:25.728530 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:15:25.958685 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:15:25.959261 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:25.964886 sudo[1773]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:25.973413 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:15:25.974357 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:25.999348 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:26.003601 auditctl[1776]: No rules Jan 17 00:15:26.004160 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:15:26.004528 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:26.014389 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:26.061441 augenrules[1795]: No rules Jan 17 00:15:26.063573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:26.066215 sudo[1772]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:26.136680 sshd[1768]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:26.142232 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:15:26.142928 systemd[1]: sshd@5-159.223.199.43:22-4.153.228.146:38404.service: Deactivated successfully. Jan 17 00:15:26.148168 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:15:26.149674 systemd-logind[1564]: Removed session 6. Jan 17 00:15:26.210222 systemd[1]: Started sshd@6-159.223.199.43:22-4.153.228.146:38406.service - OpenSSH per-connection server daemon (4.153.228.146:38406). Jan 17 00:15:26.638755 sshd[1804]: Accepted publickey for core from 4.153.228.146 port 38406 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:26.640797 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:26.647543 systemd-logind[1564]: New session 7 of user core. Jan 17 00:15:26.660321 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:15:26.884919 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:15:26.885265 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:27.403458 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 17 00:15:27.403551 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:15:27.912396 dockerd[1823]: time="2026-01-17T00:15:27.912319252Z" level=info msg="Starting up" Jan 17 00:15:28.187984 systemd[1]: var-lib-docker-metacopy\x2dcheck972131095-merged.mount: Deactivated successfully. Jan 17 00:15:28.217931 dockerd[1823]: time="2026-01-17T00:15:28.217541944Z" level=info msg="Loading containers: start." Jan 17 00:15:28.363869 kernel: Initializing XFRM netlink socket Jan 17 00:15:28.491320 systemd-networkd[1225]: docker0: Link UP Jan 17 00:15:28.514144 dockerd[1823]: time="2026-01-17T00:15:28.514059290Z" level=info msg="Loading containers: done." Jan 17 00:15:28.538425 dockerd[1823]: time="2026-01-17T00:15:28.538318633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:15:28.538647 dockerd[1823]: time="2026-01-17T00:15:28.538515110Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:15:28.538792 dockerd[1823]: time="2026-01-17T00:15:28.538753761Z" level=info msg="Daemon has completed initialization" Jan 17 00:15:28.603710 dockerd[1823]: time="2026-01-17T00:15:28.603452594Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:15:28.605265 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:15:29.550452 containerd[1599]: time="2026-01-17T00:15:29.550079414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:15:29.953551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:29.960140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:30.170991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:30.186159 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:30.273537 kubelet[1982]: E0117 00:15:30.272927 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:30.279866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:30.280198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:30.409863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850408852.mount: Deactivated successfully. 
Jan 17 00:15:31.918432 containerd[1599]: time="2026-01-17T00:15:31.918356137Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:15:31.920162 containerd[1599]: time="2026-01-17T00:15:31.920097256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.924587 containerd[1599]: time="2026-01-17T00:15:31.924511109Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.374381369s" Jan 17 00:15:31.924587 containerd[1599]: time="2026-01-17T00:15:31.924576630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:15:31.925163 containerd[1599]: time="2026-01-17T00:15:31.925107687Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.926350 containerd[1599]: time="2026-01-17T00:15:31.926076797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:15:31.926885 containerd[1599]: time="2026-01-17T00:15:31.926808952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:33.782990 containerd[1599]: time="2026-01-17T00:15:33.781919566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:33.784333 containerd[1599]: time="2026-01-17T00:15:33.784052395Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:15:33.785145 containerd[1599]: time="2026-01-17T00:15:33.785102540Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:33.789603 containerd[1599]: time="2026-01-17T00:15:33.789554347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:33.790649 containerd[1599]: time="2026-01-17T00:15:33.790604888Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.864476112s" Jan 17 00:15:33.790751 containerd[1599]: time="2026-01-17T00:15:33.790654895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 17 00:15:33.791379 containerd[1599]: time="2026-01-17T00:15:33.791345271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:15:35.500821 containerd[1599]: time="2026-01-17T00:15:35.500757788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:35.502866 containerd[1599]: time="2026-01-17T00:15:35.502478162Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:15:35.503891 containerd[1599]: time="2026-01-17T00:15:35.503810264Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:35.508621 containerd[1599]: time="2026-01-17T00:15:35.506997283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:35.508621 containerd[1599]: time="2026-01-17T00:15:35.508471585Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.717085786s" Jan 17 00:15:35.508621 containerd[1599]: time="2026-01-17T00:15:35.508510268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:15:35.510099 containerd[1599]: time="2026-01-17T00:15:35.510014000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:15:35.748403 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 00:15:37.134728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572342164.mount: Deactivated successfully.
Jan 17 00:15:37.883220 containerd[1599]: time="2026-01-17T00:15:37.883164572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:37.884541 containerd[1599]: time="2026-01-17T00:15:37.884272734Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:15:37.884541 containerd[1599]: time="2026-01-17T00:15:37.884477494Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:37.887861 containerd[1599]: time="2026-01-17T00:15:37.886749460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:37.888245 containerd[1599]: time="2026-01-17T00:15:37.888019730Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.377962261s" Jan 17 00:15:37.888345 containerd[1599]: time="2026-01-17T00:15:37.888329785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:15:37.889532 containerd[1599]: time="2026-01-17T00:15:37.889464463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:15:38.663300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977861039.mount: Deactivated successfully. Jan 17 00:15:38.844104 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 17 00:15:39.845758 containerd[1599]: time="2026-01-17T00:15:39.845672110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:39.847525 containerd[1599]: time="2026-01-17T00:15:39.847147672Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:15:39.849855 containerd[1599]: time="2026-01-17T00:15:39.848265636Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:39.851784 containerd[1599]: time="2026-01-17T00:15:39.851732234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:39.853565 containerd[1599]: time="2026-01-17T00:15:39.853519378Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.963997832s" Jan 17 00:15:39.853758 containerd[1599]: time="2026-01-17T00:15:39.853731301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:15:39.854585 containerd[1599]: time="2026-01-17T00:15:39.854537878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:15:40.453434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:15:40.468684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:40.638724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073432006.mount: Deactivated successfully. 
Jan 17 00:15:40.656890 containerd[1599]: time="2026-01-17T00:15:40.654124766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.658071 containerd[1599]: time="2026-01-17T00:15:40.657968469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:15:40.659251 containerd[1599]: time="2026-01-17T00:15:40.659172131Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.664487 containerd[1599]: time="2026-01-17T00:15:40.663570439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.665514 containerd[1599]: time="2026-01-17T00:15:40.665450923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 810.856986ms" Jan 17 00:15:40.665620 containerd[1599]: time="2026-01-17T00:15:40.665517409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:15:40.667757 containerd[1599]: time="2026-01-17T00:15:40.666674440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:15:40.689514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:40.706584 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:40.780453 kubelet[2133]: E0117 00:15:40.780374 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:40.783378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:40.783627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:41.482367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032202633.mount: Deactivated successfully. 
Jan 17 00:15:43.633948 containerd[1599]: time="2026-01-17T00:15:43.633882594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:43.635685 containerd[1599]: time="2026-01-17T00:15:43.635448296Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:15:43.638092 containerd[1599]: time="2026-01-17T00:15:43.637895423Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:43.644900 containerd[1599]: time="2026-01-17T00:15:43.644781166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:43.646586 containerd[1599]: time="2026-01-17T00:15:43.646395261Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.979664518s" Jan 17 00:15:43.646586 containerd[1599]: time="2026-01-17T00:15:43.646444558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:15:46.418491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:46.427449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:46.473060 systemd[1]: Reloading requested from client PID 2224 ('systemctl') (unit session-7.scope)... Jan 17 00:15:46.473246 systemd[1]: Reloading... Jan 17 00:15:46.615878 zram_generator::config[2266]: No configuration found. Jan 17 00:15:46.809777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:46.902799 systemd[1]: Reloading finished in 429 ms. Jan 17 00:15:46.955493 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:15:46.955569 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:15:46.958085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:46.965290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:47.116053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:47.127524 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:47.192905 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:47.192905 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:15:47.192905 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:47.193540 kubelet[2330]: I0117 00:15:47.193000 2330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:47.662288 kubelet[2330]: I0117 00:15:47.662198 2330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:15:47.662288 kubelet[2330]: I0117 00:15:47.662254 2330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:47.664485 kubelet[2330]: I0117 00:15:47.662675 2330 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:15:47.710933 kubelet[2330]: E0117 00:15:47.708794 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.223.199.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.715884 kubelet[2330]: I0117 00:15:47.715530 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:47.733396 kubelet[2330]: E0117 00:15:47.733342 2330 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:47.733849 kubelet[2330]: I0117 00:15:47.733686 2330 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:47.742405 kubelet[2330]: I0117 00:15:47.741920 2330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:15:47.746939 kubelet[2330]: I0117 00:15:47.746014 2330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:47.746939 kubelet[2330]: I0117 00:15:47.746095 2330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-cccb0c3e85","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:15:47.746939 kubelet[2330]: I0117 00:15:47.746391 2330 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:47.746939 kubelet[2330]: I0117 00:15:47.746409 2330 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:15:47.748556 kubelet[2330]: I0117 00:15:47.748518 2330 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:47.753090 kubelet[2330]: I0117 00:15:47.753047 2330 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:15:47.753297 kubelet[2330]: I0117 00:15:47.753284 2330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:47.753372 kubelet[2330]: I0117 00:15:47.753365 2330 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:15:47.753421 kubelet[2330]: I0117 00:15:47.753414 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:47.767892 kubelet[2330]: I0117 00:15:47.767797 2330 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:47.772633 kubelet[2330]: I0117 00:15:47.772570 2330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:15:47.775321 kubelet[2330]: W0117 00:15:47.773800 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
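
The NodeConfig dump above includes the kubelet's default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, and imagefs.inodesFree < 5%. A simplified Python model of how such quantity- and percentage-based thresholds are evaluated; the sample node stats are hypothetical, and the kubelet's real eviction manager (eviction_manager.go) is considerably more involved:

    # Hard-eviction thresholds as logged in the kubelet NodeConfig above.
    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 1024**2),  # < 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, available: float, capacity: float) -> bool:
        """True if the signal's available amount is under its eviction threshold."""
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else value * capacity
        return available < limit

    # Hypothetical stats: 2 GiB RAM with 80 MiB free -> memory threshold breached.
    print(breached("memory.available", 80 * 1024**2, 2 * 1024**3))    # True
    print(breached("nodefs.available", 30 * 1024**3, 100 * 1024**3))  # False (30% free)
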
Jan 17 00:15:47.775321 kubelet[2330]: I0117 00:15:47.774740 2330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:15:47.775321 kubelet[2330]: I0117 00:15:47.774791 2330 server.go:1287] "Started kubelet" Jan 17 00:15:47.775321 kubelet[2330]: W0117 00:15:47.775194 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.199.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:47.775658 kubelet[2330]: E0117 00:15:47.775336 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.199.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.775658 kubelet[2330]: W0117 00:15:47.775473 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.199.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-cccb0c3e85&limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:47.775658 kubelet[2330]: E0117 00:15:47.775527 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.199.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-cccb0c3e85&limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.790247 kubelet[2330]: I0117 00:15:47.789183 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:47.790247 kubelet[2330]: I0117 00:15:47.790126 2330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:47.794779 kubelet[2330]: E0117 00:15:47.789621 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.199.43:6443/api/v1/namespaces/default/events\": dial tcp 159.223.199.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-cccb0c3e85.188b5c77aeb11d95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-cccb0c3e85,UID:ci-4081.3.6-n-cccb0c3e85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-cccb0c3e85,},FirstTimestamp:2026-01-17 00:15:47.774758293 +0000 UTC m=+0.642332792,LastTimestamp:2026-01-17 00:15:47.774758293 +0000 UTC m=+0.642332792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-cccb0c3e85,}" Jan 17 00:15:47.794779 kubelet[2330]: I0117 00:15:47.792613 2330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:47.796016 kubelet[2330]: I0117 00:15:47.795954 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:47.804061 kubelet[2330]: I0117 00:15:47.802282 2330 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:15:47.804583 kubelet[2330]: I0117 00:15:47.804554 2330 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:47.811344 kubelet[2330]: I0117 00:15:47.806684 2330 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:15:47.811925 kubelet[2330]: I0117 00:15:47.806735 2330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:15:47.811925 kubelet[2330]: E0117 00:15:47.807239 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" Jan 17 00:15:47.812108 kubelet[2330]: I0117 00:15:47.812006 2330 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:15:47.813368 kubelet[2330]: W0117 00:15:47.813284 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.199.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:47.813474 kubelet[2330]: E0117 00:15:47.813387 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.199.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.813565 kubelet[2330]: E0117 00:15:47.813504 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.199.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cccb0c3e85?timeout=10s\": dial tcp 159.223.199.43:6443: connect: connection refused" interval="200ms" Jan 17 00:15:47.815992 kubelet[2330]: I0117 00:15:47.815943 2330 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:15:47.817679 kubelet[2330]: I0117 00:15:47.817618 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:47.826870 kubelet[2330]: I0117 00:15:47.826494 2330 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:15:47.847643 kubelet[2330]: I0117 00:15:47.847581 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:47.861941 kubelet[2330]: I0117 00:15:47.861895 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:15:47.861941 kubelet[2330]: I0117 00:15:47.861936 2330 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:15:47.862142 kubelet[2330]: I0117 00:15:47.861963 2330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
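
Every reflector list/watch failure and the lease-controller error above share one underlying cause: TCP connections to the apiserver at 159.223.199.43:6443 are refused, because kube-apiserver is itself one of the static pods this kubelet has not yet started. Note how the lease controller's retry interval doubles through the log (200ms here, then 400ms, 800ms, and 1.6s below). A rough Python sketch of that probe-and-back-off pattern; the function is ours, and the 7s cap is an assumption rather than something taken from the log:

    import socket
    import time

    def wait_for_apiserver(host: str, port: int,
                           interval: float = 0.2, cap: float = 7.0) -> None:
        """Retry a TCP connect with a doubling interval, like the log above."""
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return  # apiserver is accepting connections
            except OSError as err:  # e.g. ConnectionRefusedError, as in the log
                print(f"dial tcp {host}:{port}: {err}; retrying in {interval:.1f}s")
                time.sleep(interval)
                interval = min(interval * 2, cap)

    # wait_for_apiserver("159.223.199.43", 6443)
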
Jan 17 00:15:47.862142 kubelet[2330]: I0117 00:15:47.861975 2330 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:15:47.862142 kubelet[2330]: E0117 00:15:47.862043 2330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:47.865300 kubelet[2330]: W0117 00:15:47.865049 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.199.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:47.865300 kubelet[2330]: E0117 00:15:47.865125 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.199.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.868944 kubelet[2330]: I0117 00:15:47.868412 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:47.868944 kubelet[2330]: I0117 00:15:47.868439 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:47.868944 kubelet[2330]: I0117 00:15:47.868468 2330 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:47.876269 kubelet[2330]: I0117 00:15:47.875814 2330 policy_none.go:49] "None policy: Start" Jan 17 00:15:47.876269 kubelet[2330]: I0117 00:15:47.875877 2330 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:15:47.876269 kubelet[2330]: I0117 00:15:47.875899 2330 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:15:47.886867 kubelet[2330]: I0117 00:15:47.885562 2330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:15:47.886867 kubelet[2330]: I0117 00:15:47.885805 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:47.886867 kubelet[2330]: I0117 00:15:47.885822 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:47.888074 kubelet[2330]: I0117 00:15:47.888047 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:47.889442 kubelet[2330]: E0117 00:15:47.889405 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:15:47.889520 kubelet[2330]: E0117 00:15:47.889466 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-cccb0c3e85\" not found" Jan 17 00:15:47.968871 kubelet[2330]: E0117 00:15:47.968055 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:47.970844 kubelet[2330]: E0117 00:15:47.970783 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:47.974078 kubelet[2330]: E0117 00:15:47.974038 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:47.988239 kubelet[2330]: I0117 00:15:47.988185 2330 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:47.988613 kubelet[2330]: E0117 00:15:47.988579 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.199.43:6443/api/v1/nodes\": dial tcp 159.223.199.43:6443: connect: connection refused" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.014193 kubelet[2330]: E0117 00:15:48.014134 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.199.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cccb0c3e85?timeout=10s\": dial tcp 159.223.199.43:6443: connect: connection refused" interval="400ms" Jan 17 00:15:48.113111 kubelet[2330]: I0117 00:15:48.113032 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113111 kubelet[2330]: I0117 00:15:48.113106 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113111 kubelet[2330]: I0117 00:15:48.113141 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113361 kubelet[2330]: I0117 00:15:48.113160 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113361 kubelet[2330]: I0117 00:15:48.113182 
2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113361 kubelet[2330]: I0117 00:15:48.113209 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d897ebb527d1492d4996bebfb195a42-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-cccb0c3e85\" (UID: \"4d897ebb527d1492d4996bebfb195a42\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113361 kubelet[2330]: I0117 00:15:48.113237 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113361 kubelet[2330]: I0117 00:15:48.113266 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.113521 kubelet[2330]: I0117 00:15:48.113286 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.191670 kubelet[2330]: I0117 00:15:48.190900 2330 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.191670 kubelet[2330]: E0117 00:15:48.191540 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.199.43:6443/api/v1/nodes\": dial tcp 159.223.199.43:6443: connect: connection refused" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.270404 kubelet[2330]: E0117 00:15:48.270257 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.273700 kubelet[2330]: E0117 00:15:48.272334 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.273860 containerd[1599]: time="2026-01-17T00:15:48.273318755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-cccb0c3e85,Uid:40bb3dcea3c9ff743f283ef0d1415705,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:48.276463 kubelet[2330]: E0117 00:15:48.276410 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.281208 
containerd[1599]: time="2026-01-17T00:15:48.281125512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-cccb0c3e85,Uid:4d897ebb527d1492d4996bebfb195a42,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:48.281673 containerd[1599]: time="2026-01-17T00:15:48.281622200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-cccb0c3e85,Uid:859a9108ed029905058d45cda7c60749,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:48.284578 systemd-resolved[1480]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 17 00:15:48.415695 kubelet[2330]: E0117 00:15:48.415618 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.199.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cccb0c3e85?timeout=10s\": dial tcp 159.223.199.43:6443: connect: connection refused" interval="800ms" Jan 17 00:15:48.593444 kubelet[2330]: I0117 00:15:48.593222 2330 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.594391 kubelet[2330]: E0117 00:15:48.594333 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.199.43:6443/api/v1/nodes\": dial tcp 159.223.199.43:6443: connect: connection refused" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:48.937936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558325955.mount: Deactivated successfully. Jan 17 00:15:48.948536 containerd[1599]: time="2026-01-17T00:15:48.948451446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:48.950009 containerd[1599]: time="2026-01-17T00:15:48.949947503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:48.951745 containerd[1599]: time="2026-01-17T00:15:48.951653056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:48.952043 containerd[1599]: time="2026-01-17T00:15:48.952004471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:48.952165 containerd[1599]: time="2026-01-17T00:15:48.952133483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:48.953307 containerd[1599]: time="2026-01-17T00:15:48.953258781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:15:48.953783 containerd[1599]: time="2026-01-17T00:15:48.953711257Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:48.959151 containerd[1599]: time="2026-01-17T00:15:48.959073638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:48.960938 containerd[1599]: 
time="2026-01-17T00:15:48.959373042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.67692ms" Jan 17 00:15:48.961914 containerd[1599]: time="2026-01-17T00:15:48.961108923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.835286ms" Jan 17 00:15:48.964855 containerd[1599]: time="2026-01-17T00:15:48.964786026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 691.385829ms" Jan 17 00:15:49.008100 kubelet[2330]: W0117 00:15:49.007934 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.199.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:49.008100 kubelet[2330]: E0117 00:15:49.008048 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.199.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:49.120982 kubelet[2330]: W0117 00:15:49.120863 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.199.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:49.120982 kubelet[2330]: E0117 00:15:49.120945 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.199.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:49.138477 containerd[1599]: time="2026-01-17T00:15:49.138100724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:49.138477 containerd[1599]: time="2026-01-17T00:15:49.138164574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:49.138477 containerd[1599]: time="2026-01-17T00:15:49.138176162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.138477 containerd[1599]: time="2026-01-17T00:15:49.138273457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.141301 containerd[1599]: time="2026-01-17T00:15:49.141193262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:49.141301 containerd[1599]: time="2026-01-17T00:15:49.141256997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:49.141573 containerd[1599]: time="2026-01-17T00:15:49.141535835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.144045 containerd[1599]: time="2026-01-17T00:15:49.142743917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.150675 containerd[1599]: time="2026-01-17T00:15:49.150526530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:49.151929 containerd[1599]: time="2026-01-17T00:15:49.150632121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:49.152055 containerd[1599]: time="2026-01-17T00:15:49.151902489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.154296 containerd[1599]: time="2026-01-17T00:15:49.154212134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:49.218031 kubelet[2330]: E0117 00:15:49.217972 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.199.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cccb0c3e85?timeout=10s\": dial tcp 159.223.199.43:6443: connect: connection refused" interval="1.6s" Jan 17 00:15:49.264735 containerd[1599]: time="2026-01-17T00:15:49.264581601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-cccb0c3e85,Uid:859a9108ed029905058d45cda7c60749,Namespace:kube-system,Attempt:0,} returns sandbox id \"878e670c07717698ff8456f05650ca83d0307ae9e4db65690f8ff42613a0505f\"" Jan 17 00:15:49.268891 kubelet[2330]: E0117 00:15:49.268642 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:49.273277 containerd[1599]: time="2026-01-17T00:15:49.273239128Z" level=info msg="CreateContainer within sandbox \"878e670c07717698ff8456f05650ca83d0307ae9e4db65690f8ff42613a0505f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:15:49.286881 containerd[1599]: time="2026-01-17T00:15:49.286780401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-cccb0c3e85,Uid:4d897ebb527d1492d4996bebfb195a42,Namespace:kube-system,Attempt:0,} returns sandbox id \"900aca4716836e8c16c1463137268eab4dd43fdb0238989e9798bfe87b5f3440\"" Jan 17 00:15:49.288422 kubelet[2330]: E0117 00:15:49.288391 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:49.291763 containerd[1599]: 
time="2026-01-17T00:15:49.291718511Z" level=info msg="CreateContainer within sandbox \"900aca4716836e8c16c1463137268eab4dd43fdb0238989e9798bfe87b5f3440\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:15:49.302244 containerd[1599]: time="2026-01-17T00:15:49.302156243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-cccb0c3e85,Uid:40bb3dcea3c9ff743f283ef0d1415705,Namespace:kube-system,Attempt:0,} returns sandbox id \"08adcd5849ad209b9ce100747882baff514766a4a4dbc5dd3f88a3ad0a2179fe\"" Jan 17 00:15:49.303597 kubelet[2330]: E0117 00:15:49.303387 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:49.306946 containerd[1599]: time="2026-01-17T00:15:49.306886019Z" level=info msg="CreateContainer within sandbox \"08adcd5849ad209b9ce100747882baff514766a4a4dbc5dd3f88a3ad0a2179fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:15:49.308396 containerd[1599]: time="2026-01-17T00:15:49.308362668Z" level=info msg="CreateContainer within sandbox \"878e670c07717698ff8456f05650ca83d0307ae9e4db65690f8ff42613a0505f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1b807202f2def83008a63a2c0be5d72ced595f7b9068c4f73d60e93a9b2a150\"" Jan 17 00:15:49.309713 containerd[1599]: time="2026-01-17T00:15:49.309669599Z" level=info msg="StartContainer for \"c1b807202f2def83008a63a2c0be5d72ced595f7b9068c4f73d60e93a9b2a150\"" Jan 17 00:15:49.311457 containerd[1599]: time="2026-01-17T00:15:49.311406306Z" level=info msg="CreateContainer within sandbox \"900aca4716836e8c16c1463137268eab4dd43fdb0238989e9798bfe87b5f3440\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"708c1ab529d645b556d49f90cc5d1a6eee83d37b8566c213b37a5a6586298a49\"" Jan 17 00:15:49.312614 containerd[1599]: time="2026-01-17T00:15:49.312586036Z" level=info msg="StartContainer for \"708c1ab529d645b556d49f90cc5d1a6eee83d37b8566c213b37a5a6586298a49\"" Jan 17 00:15:49.316921 kubelet[2330]: W0117 00:15:49.316671 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.199.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-cccb0c3e85&limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:49.316921 kubelet[2330]: E0117 00:15:49.316783 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.199.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-cccb0c3e85&limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:49.327330 containerd[1599]: time="2026-01-17T00:15:49.327177225Z" level=info msg="CreateContainer within sandbox \"08adcd5849ad209b9ce100747882baff514766a4a4dbc5dd3f88a3ad0a2179fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a796534721d76e8fa9f7b60df69e16e649d1e0431c8c7b29dc25caddf30db19\"" Jan 17 00:15:49.328357 containerd[1599]: time="2026-01-17T00:15:49.328172014Z" level=info msg="StartContainer for \"7a796534721d76e8fa9f7b60df69e16e649d1e0431c8c7b29dc25caddf30db19\"" Jan 17 00:15:49.349472 kubelet[2330]: W0117 00:15:49.349324 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://159.223.199.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.199.43:6443: connect: connection refused Jan 17 00:15:49.349472 kubelet[2330]: E0117 00:15:49.349427 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.199.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.199.43:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:49.402015 kubelet[2330]: I0117 00:15:49.400801 2330 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:49.402015 kubelet[2330]: E0117 00:15:49.401173 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.199.43:6443/api/v1/nodes\": dial tcp 159.223.199.43:6443: connect: connection refused" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:49.478183 containerd[1599]: time="2026-01-17T00:15:49.476329389Z" level=info msg="StartContainer for \"c1b807202f2def83008a63a2c0be5d72ced595f7b9068c4f73d60e93a9b2a150\" returns successfully" Jan 17 00:15:49.478183 containerd[1599]: time="2026-01-17T00:15:49.476446496Z" level=info msg="StartContainer for \"708c1ab529d645b556d49f90cc5d1a6eee83d37b8566c213b37a5a6586298a49\" returns successfully" Jan 17 00:15:49.516055 containerd[1599]: time="2026-01-17T00:15:49.515998877Z" level=info msg="StartContainer for \"7a796534721d76e8fa9f7b60df69e16e649d1e0431c8c7b29dc25caddf30db19\" returns successfully" Jan 17 00:15:49.883619 kubelet[2330]: E0117 00:15:49.881264 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:49.883619 kubelet[2330]: E0117 00:15:49.881469 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:49.891996 kubelet[2330]: E0117 00:15:49.891861 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:49.892981 kubelet[2330]: E0117 00:15:49.892073 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:49.894535 kubelet[2330]: E0117 00:15:49.894509 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:49.896242 kubelet[2330]: E0117 00:15:49.896139 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:50.898314 kubelet[2330]: E0117 00:15:50.897396 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:50.898314 kubelet[2330]: E0117 00:15:50.897587 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:50.898314 kubelet[2330]: E0117 00:15:50.897955 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:50.898314 kubelet[2330]: E0117 00:15:50.898052 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:51.004896 kubelet[2330]: I0117 00:15:51.003300 2330 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.343497 kubelet[2330]: E0117 00:15:52.343456 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.344447 kubelet[2330]: E0117 00:15:52.344131 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:52.490874 kubelet[2330]: E0117 00:15:52.490790 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-cccb0c3e85\" not found" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.518913 kubelet[2330]: E0117 00:15:52.518127 2330 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-cccb0c3e85.188b5c77aeb11d95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-cccb0c3e85,UID:ci-4081.3.6-n-cccb0c3e85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-cccb0c3e85,},FirstTimestamp:2026-01-17 00:15:47.774758293 +0000 UTC m=+0.642332792,LastTimestamp:2026-01-17 00:15:47.774758293 +0000 UTC m=+0.642332792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-cccb0c3e85,}" Jan 17 00:15:52.578701 kubelet[2330]: E0117 00:15:52.577604 2330 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-cccb0c3e85.188b5c77aff50126 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-cccb0c3e85,UID:ci-4081.3.6-n-cccb0c3e85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-cccb0c3e85,},FirstTimestamp:2026-01-17 00:15:47.795984678 +0000 UTC m=+0.663559163,LastTimestamp:2026-01-17 00:15:47.795984678 +0000 UTC m=+0.663559163,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-cccb0c3e85,}" Jan 17 00:15:52.616165 kubelet[2330]: I0117 00:15:52.610637 2330 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.616165 kubelet[2330]: I0117 00:15:52.611163 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" 
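
The recurring "Nameserver limits exceeded" warnings are the kubelet's resolv.conf validation: it applies at most three nameservers per pod (matching the glibc MAXNS limit) and logs the list it kept, here "67.207.67.3 67.207.67.2 67.207.67.3". A minimal Python sketch of that truncation; the sample resolv.conf content is hypothetical:

    MAX_NAMESERVERS = 3  # glibc MAXNS; the kubelet enforces the same limit

    def applied_nameservers(resolv_conf: str) -> list[str]:
        """Keep at most MAX_NAMESERVERS entries, warning like the kubelet does."""
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits exceeded, some nameservers have been omitted, "
                  "the applied nameserver line is:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers[:MAX_NAMESERVERS]

    sample = ("nameserver 67.207.67.3\nnameserver 67.207.67.2\n"
              "nameserver 67.207.67.3\nnameserver 8.8.8.8\n")
    applied_nameservers(sample)
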
Jan 17 00:15:52.638858 kubelet[2330]: E0117 00:15:52.637761 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.638858 kubelet[2330]: I0117 00:15:52.637891 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.644141 kubelet[2330]: E0117 00:15:52.644096 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-cccb0c3e85\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.644141 kubelet[2330]: I0117 00:15:52.644131 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.647624 kubelet[2330]: E0117 00:15:52.647580 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:52.761645 kubelet[2330]: I0117 00:15:52.760913 2330 apiserver.go:52] "Watching apiserver" Jan 17 00:15:52.812864 kubelet[2330]: I0117 00:15:52.812798 2330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:15:53.141733 kubelet[2330]: I0117 00:15:53.141663 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:53.144542 kubelet[2330]: E0117 00:15:53.144483 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:53.144780 kubelet[2330]: E0117 00:15:53.144757 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:54.603865 systemd[1]: Reloading requested from client PID 2599 ('systemctl') (unit session-7.scope)... Jan 17 00:15:54.603894 systemd[1]: Reloading... Jan 17 00:15:54.716774 zram_generator::config[2634]: No configuration found. Jan 17 00:15:54.870404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:54.963630 systemd[1]: Reloading finished in 358 ms. Jan 17 00:15:55.002748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:55.014584 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:15:55.015357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:55.029417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:55.197108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
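
The three "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are a bootstrap-ordering artifact: the static control-plane pods reference a built-in PriorityClass that the freshly started apiserver has not yet created, so mirror-pod creation is forbidden until it exists, and the kubelet retries on later sync loops. A diagnostic sketch that polls for that PriorityClass over the apiserver's REST API; the bearer-token parameter and the disabled certificate verification are simplifications for illustration, not how the kubelet authenticates:

    import json
    import ssl
    import time
    import urllib.error
    import urllib.request

    URL = ("https://159.223.199.43:6443/apis/scheduling.k8s.io/v1/"
           "priorityclasses/system-node-critical")

    def wait_for_priority_class(token: str, interval: float = 1.0) -> dict:
        """Poll until the built-in PriorityClass exists on the apiserver."""
        ctx = ssl._create_unverified_context()  # sketch only; verify certs in real use
        req = urllib.request.Request(URL,
                                     headers={"Authorization": f"Bearer {token}"})
        while True:
            try:
                with urllib.request.urlopen(req, context=ctx) as resp:
                    return json.load(resp)  # exists; mirror pods can now be created
            except urllib.error.HTTPError as err:
                if err.code != 404:
                    raise       # unexpected API error
            except urllib.error.URLError:
                pass            # connection refused: apiserver not up yet
            time.sleep(interval)
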
Jan 17 00:15:55.210428 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:55.277935 kubelet[2699]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:55.279852 kubelet[2699]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:15:55.279852 kubelet[2699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:55.279852 kubelet[2699]: I0117 00:15:55.278484 2699 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:55.287671 kubelet[2699]: I0117 00:15:55.287612 2699 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:15:55.287671 kubelet[2699]: I0117 00:15:55.287651 2699 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:55.287987 kubelet[2699]: I0117 00:15:55.287967 2699 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:15:55.291180 kubelet[2699]: I0117 00:15:55.291136 2699 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:15:55.301170 kubelet[2699]: I0117 00:15:55.301042 2699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:55.306933 kubelet[2699]: E0117 00:15:55.305065 2699 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:55.306933 kubelet[2699]: I0117 00:15:55.305096 2699 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:55.310521 kubelet[2699]: I0117 00:15:55.309159 2699 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:15:55.310521 kubelet[2699]: I0117 00:15:55.309738 2699 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:55.310521 kubelet[2699]: I0117 00:15:55.309791 2699 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-cccb0c3e85","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:15:55.310521 kubelet[2699]: I0117 00:15:55.310115 2699 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:55.310874 kubelet[2699]: I0117 00:15:55.310125 2699 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:15:55.310874 kubelet[2699]: I0117 00:15:55.310185 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:55.311308 kubelet[2699]: I0117 00:15:55.311287 2699 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:15:55.311410 kubelet[2699]: I0117 00:15:55.311401 2699 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:55.311471 kubelet[2699]: I0117 00:15:55.311465 2699 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:15:55.311533 kubelet[2699]: I0117 00:15:55.311526 2699 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:55.314199 kubelet[2699]: I0117 00:15:55.314169 2699 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:55.314586 kubelet[2699]: I0117 00:15:55.314570 2699 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:15:55.315378 kubelet[2699]: I0117 00:15:55.315352 2699 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:15:55.315424 kubelet[2699]: I0117 00:15:55.315406 2699 server.go:1287] "Started kubelet" Jan 17 00:15:55.317638 kubelet[2699]: I0117 00:15:55.317612 2699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:55.328701 kubelet[2699]: I0117 00:15:55.328650 2699 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:55.336402 kubelet[2699]: I0117 00:15:55.336347 2699 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:15:55.338149 kubelet[2699]: I0117 00:15:55.329635 2699 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:55.344145 kubelet[2699]: I0117 00:15:55.328896 2699 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:55.344725 kubelet[2699]: I0117 00:15:55.344705 2699 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:55.344936 kubelet[2699]: E0117 00:15:55.332144 2699 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cccb0c3e85\" not found" Jan 17 00:15:55.345129 kubelet[2699]: I0117 00:15:55.331580 2699 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:15:55.345776 kubelet[2699]: I0117 00:15:55.331597 2699 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:15:55.346005 kubelet[2699]: I0117 00:15:55.345993 2699 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:15:55.348733 kubelet[2699]: I0117 00:15:55.348711 2699 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:15:55.349208 kubelet[2699]: I0117 00:15:55.349186 2699 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:55.361915 kubelet[2699]: I0117 00:15:55.361872 2699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:55.363600 kubelet[2699]: I0117 00:15:55.363575 2699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:15:55.363873 kubelet[2699]: I0117 00:15:55.363771 2699 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:15:55.363873 kubelet[2699]: I0117 00:15:55.363800 2699 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:15:55.363873 kubelet[2699]: I0117 00:15:55.363807 2699 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:15:55.364015 kubelet[2699]: E0117 00:15:55.364000 2699 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:55.368062 kubelet[2699]: I0117 00:15:55.367514 2699 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:15:55.368771 kubelet[2699]: E0117 00:15:55.368750 2699 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:15:55.441134 kubelet[2699]: I0117 00:15:55.441105 2699 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:55.441450 kubelet[2699]: I0117 00:15:55.441432 2699 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:55.441532 kubelet[2699]: I0117 00:15:55.441525 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:55.441766 kubelet[2699]: I0117 00:15:55.441752 2699 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:15:55.441858 kubelet[2699]: I0117 00:15:55.441812 2699 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:15:55.441924 kubelet[2699]: I0117 00:15:55.441917 2699 policy_none.go:49] "None policy: Start" Jan 17 00:15:55.441968 kubelet[2699]: I0117 00:15:55.441962 2699 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:15:55.442009 kubelet[2699]: I0117 00:15:55.442003 2699 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:15:55.442162 kubelet[2699]: I0117 00:15:55.442153 2699 state_mem.go:75] "Updated machine memory state" Jan 17 00:15:55.444008 kubelet[2699]: I0117 00:15:55.443973 2699 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:15:55.444332 kubelet[2699]: I0117 00:15:55.444315 2699 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:55.444464 kubelet[2699]: I0117 00:15:55.444424 2699 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:55.445749 kubelet[2699]: I0117 00:15:55.445359 2699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:55.450171 kubelet[2699]: E0117 00:15:55.448489 2699 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:15:55.468014 kubelet[2699]: I0117 00:15:55.467961 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.470506 kubelet[2699]: I0117 00:15:55.470461 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.471414 kubelet[2699]: I0117 00:15:55.471380 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.478069 kubelet[2699]: W0117 00:15:55.477796 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:55.481061 kubelet[2699]: W0117 00:15:55.480934 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:55.482241 kubelet[2699]: W0117 00:15:55.482194 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:55.545973 kubelet[2699]: I0117 00:15:55.545893 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.546756 kubelet[2699]: I0117 00:15:55.546386 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.546756 kubelet[2699]: I0117 00:15:55.546442 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.546756 kubelet[2699]: I0117 00:15:55.546480 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.546756 kubelet[2699]: I0117 00:15:55.546514 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.546756 kubelet[2699]: I0117 00:15:55.546541 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/859a9108ed029905058d45cda7c60749-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cccb0c3e85\" (UID: \"859a9108ed029905058d45cda7c60749\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.547186 kubelet[2699]: I0117 00:15:55.546569 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.547186 kubelet[2699]: I0117 00:15:55.546598 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.547186 kubelet[2699]: I0117 00:15:55.546625 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40bb3dcea3c9ff743f283ef0d1415705-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-cccb0c3e85\" (UID: \"40bb3dcea3c9ff743f283ef0d1415705\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.547186 kubelet[2699]: I0117 00:15:55.546651 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d897ebb527d1492d4996bebfb195a42-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-cccb0c3e85\" (UID: \"4d897ebb527d1492d4996bebfb195a42\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.555596 kubelet[2699]: I0117 00:15:55.555539 2699 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.556030 kubelet[2699]: I0117 00:15:55.555803 2699 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:15:55.780292 kubelet[2699]: E0117 00:15:55.778561 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.782497 kubelet[2699]: E0117 00:15:55.782469 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.782952 kubelet[2699]: E0117 00:15:55.782754 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.325320 kubelet[2699]: I0117 00:15:56.325227 2699 apiserver.go:52] "Watching apiserver" Jan 17 00:15:56.346589 kubelet[2699]: I0117 00:15:56.346524 2699 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:15:56.401599 kubelet[2699]: E0117 00:15:56.401564 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.404815 kubelet[2699]: E0117 00:15:56.404780 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.406245 kubelet[2699]: E0117 00:15:56.406201 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.422030 kubelet[2699]: I0117 00:15:56.421601 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cccb0c3e85" podStartSLOduration=1.421580163 podStartE2EDuration="1.421580163s" podCreationTimestamp="2026-01-17 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:56.407988854 +0000 UTC m=+1.188419581" watchObservedRunningTime="2026-01-17 00:15:56.421580163 +0000 UTC m=+1.202010883" Jan 17 00:15:56.435664 kubelet[2699]: I0117 00:15:56.434854 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cccb0c3e85" podStartSLOduration=1.434813989 podStartE2EDuration="1.434813989s" podCreationTimestamp="2026-01-17 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:56.421822053 +0000 UTC m=+1.202252782" watchObservedRunningTime="2026-01-17 00:15:56.434813989 +0000 UTC m=+1.215244702" Jan 17 00:15:56.446398 kubelet[2699]: I0117 00:15:56.446325 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cccb0c3e85" podStartSLOduration=1.446306727 podStartE2EDuration="1.446306727s" podCreationTimestamp="2026-01-17 00:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:56.435193554 +0000 UTC m=+1.215624283" watchObservedRunningTime="2026-01-17 00:15:56.446306727 +0000 UTC m=+1.226737450" Jan 17 00:15:57.403603 kubelet[2699]: E0117 00:15:57.403559 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:57.405234 kubelet[2699]: E0117 00:15:57.404312 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:57.405234 kubelet[2699]: E0117 00:15:57.404636 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:00.153485 kubelet[2699]: E0117 00:16:00.153072 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:00.410695 kubelet[2699]: E0117 00:16:00.410444 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:00.945548 kubelet[2699]: I0117 00:16:00.945489 2699 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:16:00.946607 containerd[1599]: time="2026-01-17T00:16:00.945961195Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Jan 17 00:16:00.947314 kubelet[2699]: I0117 00:16:00.946222 2699 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:16:01.690372 kubelet[2699]: I0117 00:16:01.690221 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c68fa892-9b4d-41f1-ba60-775f8c9a2d1f-xtables-lock\") pod \"kube-proxy-hkwhl\" (UID: \"c68fa892-9b4d-41f1-ba60-775f8c9a2d1f\") " pod="kube-system/kube-proxy-hkwhl" Jan 17 00:16:01.690372 kubelet[2699]: I0117 00:16:01.690343 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c68fa892-9b4d-41f1-ba60-775f8c9a2d1f-lib-modules\") pod \"kube-proxy-hkwhl\" (UID: \"c68fa892-9b4d-41f1-ba60-775f8c9a2d1f\") " pod="kube-system/kube-proxy-hkwhl" Jan 17 00:16:01.690372 kubelet[2699]: I0117 00:16:01.690380 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c68fa892-9b4d-41f1-ba60-775f8c9a2d1f-kube-proxy\") pod \"kube-proxy-hkwhl\" (UID: \"c68fa892-9b4d-41f1-ba60-775f8c9a2d1f\") " pod="kube-system/kube-proxy-hkwhl" Jan 17 00:16:01.691415 kubelet[2699]: I0117 00:16:01.690455 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gf6\" (UniqueName: \"kubernetes.io/projected/c68fa892-9b4d-41f1-ba60-775f8c9a2d1f-kube-api-access-k2gf6\") pod \"kube-proxy-hkwhl\" (UID: \"c68fa892-9b4d-41f1-ba60-775f8c9a2d1f\") " pod="kube-system/kube-proxy-hkwhl" Jan 17 00:16:01.964607 kubelet[2699]: E0117 00:16:01.964389 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:01.967950 containerd[1599]: time="2026-01-17T00:16:01.966044503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkwhl,Uid:c68fa892-9b4d-41f1-ba60-775f8c9a2d1f,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:02.032718 containerd[1599]: time="2026-01-17T00:16:02.031210532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:02.032718 containerd[1599]: time="2026-01-17T00:16:02.031281039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:02.032718 containerd[1599]: time="2026-01-17T00:16:02.031311486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.032718 containerd[1599]: time="2026-01-17T00:16:02.031523690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.088572 kubelet[2699]: E0117 00:16:02.088088 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:02.163027 containerd[1599]: time="2026-01-17T00:16:02.162600624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkwhl,Uid:c68fa892-9b4d-41f1-ba60-775f8c9a2d1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"36bd5bd9f9a2861122565899d5afc4f5c005401bdacc5b402f6c0879128fcbe2\"" Jan 17 00:16:02.166134 kubelet[2699]: E0117 00:16:02.164451 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:02.172737 containerd[1599]: time="2026-01-17T00:16:02.172608053Z" level=info msg="CreateContainer within sandbox \"36bd5bd9f9a2861122565899d5afc4f5c005401bdacc5b402f6c0879128fcbe2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:16:02.193913 kubelet[2699]: I0117 00:16:02.193418 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7975e72c-51ab-4426-a2e3-4746c66576b9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jv58m\" (UID: \"7975e72c-51ab-4426-a2e3-4746c66576b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-jv58m" Jan 17 00:16:02.193913 kubelet[2699]: I0117 00:16:02.193482 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfxx\" (UniqueName: \"kubernetes.io/projected/7975e72c-51ab-4426-a2e3-4746c66576b9-kube-api-access-9qfxx\") pod \"tigera-operator-7dcd859c48-jv58m\" (UID: \"7975e72c-51ab-4426-a2e3-4746c66576b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-jv58m" Jan 17 00:16:02.201211 containerd[1599]: time="2026-01-17T00:16:02.200983371Z" level=info msg="CreateContainer within sandbox \"36bd5bd9f9a2861122565899d5afc4f5c005401bdacc5b402f6c0879128fcbe2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48fe33cf01bc54282130f31654dff02e9cfb297da8b95ced681726cabd8a03af\"" Jan 17 00:16:02.201967 containerd[1599]: time="2026-01-17T00:16:02.201909127Z" level=info msg="StartContainer for \"48fe33cf01bc54282130f31654dff02e9cfb297da8b95ced681726cabd8a03af\"" Jan 17 00:16:02.312856 containerd[1599]: time="2026-01-17T00:16:02.312632561Z" level=info msg="StartContainer for \"48fe33cf01bc54282130f31654dff02e9cfb297da8b95ced681726cabd8a03af\" returns successfully" Jan 17 00:16:02.398878 containerd[1599]: time="2026-01-17T00:16:02.398141138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jv58m,Uid:7975e72c-51ab-4426-a2e3-4746c66576b9,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:16:02.422346 kubelet[2699]: E0117 00:16:02.420643 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:02.422346 kubelet[2699]: E0117 00:16:02.420840 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:02.484488 kubelet[2699]: I0117 00:16:02.484176 2699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hkwhl" podStartSLOduration=1.484147243 podStartE2EDuration="1.484147243s" podCreationTimestamp="2026-01-17 00:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:02.462592233 +0000 UTC m=+7.243022984" watchObservedRunningTime="2026-01-17 00:16:02.484147243 +0000 UTC m=+7.264577977" Jan 17 00:16:02.493241 containerd[1599]: time="2026-01-17T00:16:02.493051090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:02.493241 containerd[1599]: time="2026-01-17T00:16:02.493151935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:02.493241 containerd[1599]: time="2026-01-17T00:16:02.493192019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.495274 containerd[1599]: time="2026-01-17T00:16:02.495044125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.631454 containerd[1599]: time="2026-01-17T00:16:02.628708918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jv58m,Uid:7975e72c-51ab-4426-a2e3-4746c66576b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0638fac473a7d467fb168a2a18db885c764715b92d34beb99f34bebee45844d\"" Jan 17 00:16:02.640046 containerd[1599]: time="2026-01-17T00:16:02.639726549Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:16:02.817029 update_engine[1566]: I20260117 00:16:02.816919 1566 update_attempter.cc:509] Updating boot flags... Jan 17 00:16:02.932072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2922) Jan 17 00:16:03.046214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2927) Jan 17 00:16:03.426207 kubelet[2699]: E0117 00:16:03.425711 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:04.052257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805690490.mount: Deactivated successfully. 
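
The dns.go:153 errors that recur throughout this section are kubelet's resolv.conf validation: kubelet applies at most three nameservers (matching the glibc MAXNS resolver limit), and this node's /etc/resolv.conf evidently lists more, since even the truncated line that survives still contains 67.207.67.3 twice. The warning is cosmetic for pods using cluster DNS, and deduplicating the resolvers in the droplet's resolv.conf would likely silence it. Below is a minimal sketch of the same check, assuming the standard resolv.conf format; it is an illustration, not kubelet's code.

// nslimit.go - a sketch reproducing the condition behind the
// "Nameserver limits exceeded" warnings above: kubelet keeps at most
// three nameservers from the node's resolv.conf, duplicates included.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		// kubelet would apply only the first three entries, as in the log above.
		fmt.Printf("limit exceeded: %d nameservers, applied line would be: %s\n",
			len(servers), strings.Join(servers[:3], " "))
	}
}
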
Jan 17 00:16:04.896247 containerd[1599]: time="2026-01-17T00:16:04.896176621Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:04.898727 containerd[1599]: time="2026-01-17T00:16:04.898415972Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:16:04.899644 containerd[1599]: time="2026-01-17T00:16:04.899597029Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:04.902730 containerd[1599]: time="2026-01-17T00:16:04.902677214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:04.904306 containerd[1599]: time="2026-01-17T00:16:04.904098814Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.264319671s" Jan 17 00:16:04.904306 containerd[1599]: time="2026-01-17T00:16:04.904140826Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:16:04.908768 containerd[1599]: time="2026-01-17T00:16:04.908547139Z" level=info msg="CreateContainer within sandbox \"e0638fac473a7d467fb168a2a18db885c764715b92d34beb99f34bebee45844d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:16:04.927551 containerd[1599]: time="2026-01-17T00:16:04.927459999Z" level=info msg="CreateContainer within sandbox \"e0638fac473a7d467fb168a2a18db885c764715b92d34beb99f34bebee45844d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5027a1e81d39f3c133ffd720eacafe396187ee2afecbe1d96779479cb9afdf2\"" Jan 17 00:16:04.927658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625577429.mount: Deactivated successfully. 
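
The "in 2.264319671s" in the Pulled message is containerd's internal measurement of the tigera-operator pull; the surrounding journal timestamps bracket it, as this small sketch of auditing the log shows (the two constants are the time= values of the PullImage request above and the Pulled event; the exact internal measurement points differ by a few tens of microseconds).

// pulltime.go - a sketch for checking pull latency from the containerd entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse errors ignored here only because the inputs are fixed literals.
	start, _ := time.Parse(time.RFC3339Nano, "2026-01-17T00:16:02.639726549Z") // PullImage logged
	done, _ := time.Parse(time.RFC3339Nano, "2026-01-17T00:16:04.904098814Z")  // Pulled logged
	fmt.Println(done.Sub(start)) // ~2.264372265s, slightly above the reported 2.264319671s
}
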
Jan 17 00:16:04.931666 containerd[1599]: time="2026-01-17T00:16:04.929894955Z" level=info msg="StartContainer for \"a5027a1e81d39f3c133ffd720eacafe396187ee2afecbe1d96779479cb9afdf2\"" Jan 17 00:16:05.021038 containerd[1599]: time="2026-01-17T00:16:05.020916950Z" level=info msg="StartContainer for \"a5027a1e81d39f3c133ffd720eacafe396187ee2afecbe1d96779479cb9afdf2\" returns successfully" Jan 17 00:16:06.574516 kubelet[2699]: E0117 00:16:06.574404 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:06.610678 kubelet[2699]: I0117 00:16:06.610466 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jv58m" podStartSLOduration=3.342852222 podStartE2EDuration="5.610441712s" podCreationTimestamp="2026-01-17 00:16:01 +0000 UTC" firstStartedPulling="2026-01-17 00:16:02.638553815 +0000 UTC m=+7.418984536" lastFinishedPulling="2026-01-17 00:16:04.906143326 +0000 UTC m=+9.686574026" observedRunningTime="2026-01-17 00:16:05.44490475 +0000 UTC m=+10.225335491" watchObservedRunningTime="2026-01-17 00:16:06.610441712 +0000 UTC m=+11.390872481" Jan 17 00:16:12.049990 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:12.118582 sshd[1804]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:12.132294 systemd[1]: sshd@6-159.223.199.43:22-4.153.228.146:38406.service: Deactivated successfully. Jan 17 00:16:12.139562 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:16:12.144456 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:16:12.145859 systemd-logind[1564]: Removed session 7. 
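
The pod_startup_latency_tracker entry for tigera-operator above shows how its two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:16:06.610441712 − 00:16:01 = 5.610441712s), while podStartSLOduration additionally excludes the image-pull window, which kubelet measures on the monotonic clock (the m=+... offsets). A sketch of that arithmetic, reproducing the logged value; the reading of the fields is inferred from the log, not from kubelet source.

// sloduration.go - reproduces the tigera-operator numbers logged above.
package main

import "fmt"

func main() {
	e2e := 5.610441712                // watchObservedRunningTime - podCreationTimestamp
	pull := 9.686574026 - 7.418984536 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartSLOduration ~= %.9f\n", e2e-pull) // prints 3.342852222
}

The earlier kube-scheduler, kube-apiserver, and kube-controller-manager entries report equal SLO and E2E durations because their pull timestamps are the zero value 0001-01-01, i.e. no image pull occurred.
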
Jan 17 00:16:18.845087 kubelet[2699]: I0117 00:16:18.845022 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hch\" (UniqueName: \"kubernetes.io/projected/b9c7f9c8-c568-4134-8f67-28864edb1054-kube-api-access-m2hch\") pod \"calico-typha-857c7bc8bb-98q5p\" (UID: \"b9c7f9c8-c568-4134-8f67-28864edb1054\") " pod="calico-system/calico-typha-857c7bc8bb-98q5p" Jan 17 00:16:18.845087 kubelet[2699]: I0117 00:16:18.845097 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9c7f9c8-c568-4134-8f67-28864edb1054-tigera-ca-bundle\") pod \"calico-typha-857c7bc8bb-98q5p\" (UID: \"b9c7f9c8-c568-4134-8f67-28864edb1054\") " pod="calico-system/calico-typha-857c7bc8bb-98q5p" Jan 17 00:16:18.845990 kubelet[2699]: I0117 00:16:18.845128 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b9c7f9c8-c568-4134-8f67-28864edb1054-typha-certs\") pod \"calico-typha-857c7bc8bb-98q5p\" (UID: \"b9c7f9c8-c568-4134-8f67-28864edb1054\") " pod="calico-system/calico-typha-857c7bc8bb-98q5p" Jan 17 00:16:18.945741 kubelet[2699]: I0117 00:16:18.945659 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-cni-net-dir\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.945741 kubelet[2699]: I0117 00:16:18.945746 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-lib-modules\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.945741 kubelet[2699]: I0117 00:16:18.945813 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60fe70bf-fe89-4352-addc-bf4afdad905d-tigera-ca-bundle\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946091 kubelet[2699]: I0117 00:16:18.945911 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-var-lib-calico\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946091 kubelet[2699]: I0117 00:16:18.945928 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-xtables-lock\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946091 kubelet[2699]: I0117 00:16:18.945947 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-cni-bin-dir\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 
00:16:18.946091 kubelet[2699]: I0117 00:16:18.945963 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-cni-log-dir\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946091 kubelet[2699]: I0117 00:16:18.946006 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60fe70bf-fe89-4352-addc-bf4afdad905d-node-certs\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946253 kubelet[2699]: I0117 00:16:18.946034 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-var-run-calico\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946253 kubelet[2699]: I0117 00:16:18.946075 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mc5\" (UniqueName: \"kubernetes.io/projected/60fe70bf-fe89-4352-addc-bf4afdad905d-kube-api-access-m2mc5\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946253 kubelet[2699]: I0117 00:16:18.946111 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-flexvol-driver-host\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.946253 kubelet[2699]: I0117 00:16:18.946131 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60fe70bf-fe89-4352-addc-bf4afdad905d-policysync\") pod \"calico-node-2rdt2\" (UID: \"60fe70bf-fe89-4352-addc-bf4afdad905d\") " pod="calico-system/calico-node-2rdt2" Jan 17 00:16:18.999804 kubelet[2699]: E0117 00:16:18.999268 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:19.060636 kubelet[2699]: E0117 00:16:19.060521 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.060636 kubelet[2699]: W0117 00:16:19.060560 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.062499 kubelet[2699]: E0117 00:16:19.062423 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.062499 kubelet[2699]: W0117 00:16:19.062454 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable 
file not found in $PATH, output: "" Jan 17 00:16:19.067311 kubelet[2699]: E0117 00:16:19.067255 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.068423 kubelet[2699]: E0117 00:16:19.068391 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.127702 kubelet[2699]: E0117 00:16:19.124123 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:19.133800 containerd[1599]: time="2026-01-17T00:16:19.133749226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rdt2,Uid:60fe70bf-fe89-4352-addc-bf4afdad905d,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:19.149205 kubelet[2699]: E0117 00:16:19.149169 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.149205 kubelet[2699]: W0117 00:16:19.149195 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.149513 kubelet[2699]: E0117 00:16:19.149221 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.149513 kubelet[2699]: I0117 00:16:19.149271 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe4a7e29-720a-4e34-a53e-e9187d031f57-varrun\") pod \"csi-node-driver-pvltb\" (UID: \"fe4a7e29-720a-4e34-a53e-e9187d031f57\") " pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:19.149513 kubelet[2699]: E0117 00:16:19.149480 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.149513 kubelet[2699]: W0117 00:16:19.149499 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.149513 kubelet[2699]: E0117 00:16:19.149510 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.149632 kubelet[2699]: I0117 00:16:19.149524 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4a7e29-720a-4e34-a53e-e9187d031f57-kubelet-dir\") pod \"csi-node-driver-pvltb\" (UID: \"fe4a7e29-720a-4e34-a53e-e9187d031f57\") " pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:19.149762 kubelet[2699]: E0117 00:16:19.149745 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.149808 kubelet[2699]: W0117 00:16:19.149761 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.149808 kubelet[2699]: E0117 00:16:19.149781 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.149998 kubelet[2699]: I0117 00:16:19.149813 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz69\" (UniqueName: \"kubernetes.io/projected/fe4a7e29-720a-4e34-a53e-e9187d031f57-kube-api-access-rrz69\") pod \"csi-node-driver-pvltb\" (UID: \"fe4a7e29-720a-4e34-a53e-e9187d031f57\") " pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:19.150205 kubelet[2699]: E0117 00:16:19.150061 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.150205 kubelet[2699]: W0117 00:16:19.150078 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.150205 kubelet[2699]: E0117 00:16:19.150103 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.150421 kubelet[2699]: E0117 00:16:19.150410 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.150468 kubelet[2699]: W0117 00:16:19.150460 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.150592 kubelet[2699]: E0117 00:16:19.150533 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.151397 kubelet[2699]: E0117 00:16:19.151304 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.151397 kubelet[2699]: W0117 00:16:19.151328 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.151397 kubelet[2699]: E0117 00:16:19.151348 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.151879 kubelet[2699]: E0117 00:16:19.151756 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.151879 kubelet[2699]: W0117 00:16:19.151767 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.151879 kubelet[2699]: E0117 00:16:19.151794 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.152285 kubelet[2699]: E0117 00:16:19.152049 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.152285 kubelet[2699]: W0117 00:16:19.152060 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.152589 kubelet[2699]: E0117 00:16:19.152391 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.152589 kubelet[2699]: I0117 00:16:19.152424 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe4a7e29-720a-4e34-a53e-e9187d031f57-registration-dir\") pod \"csi-node-driver-pvltb\" (UID: \"fe4a7e29-720a-4e34-a53e-e9187d031f57\") " pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:19.152589 kubelet[2699]: E0117 00:16:19.152487 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.152589 kubelet[2699]: W0117 00:16:19.152494 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.152589 kubelet[2699]: E0117 00:16:19.152511 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.153019 kubelet[2699]: E0117 00:16:19.152905 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.153019 kubelet[2699]: W0117 00:16:19.152917 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.153019 kubelet[2699]: E0117 00:16:19.152928 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.153019 kubelet[2699]: E0117 00:16:19.154308 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.153019 kubelet[2699]: W0117 00:16:19.154321 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.153019 kubelet[2699]: E0117 00:16:19.154345 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.153019 kubelet[2699]: I0117 00:16:19.154369 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe4a7e29-720a-4e34-a53e-e9187d031f57-socket-dir\") pod \"csi-node-driver-pvltb\" (UID: \"fe4a7e29-720a-4e34-a53e-e9187d031f57\") " pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:19.153019 kubelet[2699]: E0117 00:16:19.154563 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.153019 kubelet[2699]: W0117 00:16:19.154572 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.155173 kubelet[2699]: E0117 00:16:19.154581 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.155173 kubelet[2699]: E0117 00:16:19.154924 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.155173 kubelet[2699]: W0117 00:16:19.154940 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.155173 kubelet[2699]: E0117 00:16:19.154954 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.155386 kubelet[2699]: E0117 00:16:19.155376 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.155503 kubelet[2699]: W0117 00:16:19.155425 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.155503 kubelet[2699]: E0117 00:16:19.155438 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.155650 kubelet[2699]: E0117 00:16:19.155642 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.155693 kubelet[2699]: W0117 00:16:19.155686 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.155741 kubelet[2699]: E0117 00:16:19.155732 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.211986 containerd[1599]: time="2026-01-17T00:16:19.211763156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:19.213612 containerd[1599]: time="2026-01-17T00:16:19.212387072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:19.213612 containerd[1599]: time="2026-01-17T00:16:19.212938599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:19.214462 containerd[1599]: time="2026-01-17T00:16:19.214311855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:19.269105 kubelet[2699]: E0117 00:16:19.267293 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:19.278616 kubelet[2699]: E0117 00:16:19.278501 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.278616 kubelet[2699]: W0117 00:16:19.278557 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.278616 kubelet[2699]: E0117 00:16:19.278583 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.279516 kubelet[2699]: E0117 00:16:19.279224 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.279516 kubelet[2699]: W0117 00:16:19.279295 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.279516 kubelet[2699]: E0117 00:16:19.279375 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.280491 kubelet[2699]: E0117 00:16:19.279877 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.280491 kubelet[2699]: W0117 00:16:19.279895 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.280491 kubelet[2699]: E0117 00:16:19.279909 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.280491 kubelet[2699]: E0117 00:16:19.280436 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.280491 kubelet[2699]: W0117 00:16:19.280477 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.280491 kubelet[2699]: E0117 00:16:19.280498 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.284857 kubelet[2699]: E0117 00:16:19.284494 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.284857 kubelet[2699]: W0117 00:16:19.284523 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.284857 kubelet[2699]: E0117 00:16:19.284577 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.292026 kubelet[2699]: E0117 00:16:19.290182 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.292026 kubelet[2699]: W0117 00:16:19.290218 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.292026 kubelet[2699]: E0117 00:16:19.290248 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.292026 kubelet[2699]: E0117 00:16:19.290439 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.292026 kubelet[2699]: W0117 00:16:19.290448 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.292026 kubelet[2699]: E0117 00:16:19.290460 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.292302 kubelet[2699]: E0117 00:16:19.292115 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.292302 kubelet[2699]: W0117 00:16:19.292142 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.292302 kubelet[2699]: E0117 00:16:19.292169 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.297452 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.301101 kubelet[2699]: W0117 00:16:19.297489 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.297515 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.297884 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.301101 kubelet[2699]: W0117 00:16:19.297900 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.297930 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.298172 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.301101 kubelet[2699]: W0117 00:16:19.298185 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.298248 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.301101 kubelet[2699]: E0117 00:16:19.298722 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.302314 kubelet[2699]: W0117 00:16:19.298740 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.302314 kubelet[2699]: E0117 00:16:19.300263 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.302314 kubelet[2699]: E0117 00:16:19.300530 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.302314 kubelet[2699]: W0117 00:16:19.300555 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.302314 kubelet[2699]: E0117 00:16:19.300573 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.303510 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.306971 kubelet[2699]: W0117 00:16:19.303568 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304154 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304418 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.306971 kubelet[2699]: W0117 00:16:19.304430 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304444 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304591 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.306971 kubelet[2699]: W0117 00:16:19.304598 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304606 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.306971 kubelet[2699]: E0117 00:16:19.304867 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307324 kubelet[2699]: W0117 00:16:19.304880 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.304895 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.305108 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307324 kubelet[2699]: W0117 00:16:19.305117 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.305131 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.305310 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307324 kubelet[2699]: W0117 00:16:19.305325 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.305335 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307324 kubelet[2699]: E0117 00:16:19.305472 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307324 kubelet[2699]: W0117 00:16:19.305483 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.305491 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.305723 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307553 kubelet[2699]: W0117 00:16:19.305737 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.305750 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.306498 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307553 kubelet[2699]: W0117 00:16:19.306509 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.306546 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.306738 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307553 kubelet[2699]: W0117 00:16:19.306747 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307553 kubelet[2699]: E0117 00:16:19.306760 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307767 kubelet[2699]: E0117 00:16:19.307039 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307767 kubelet[2699]: W0117 00:16:19.307051 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307767 kubelet[2699]: E0117 00:16:19.307086 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.307767 kubelet[2699]: E0117 00:16:19.307395 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.307767 kubelet[2699]: W0117 00:16:19.307426 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.307767 kubelet[2699]: E0117 00:16:19.307442 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:19.311277 containerd[1599]: time="2026-01-17T00:16:19.310466714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857c7bc8bb-98q5p,Uid:b9c7f9c8-c568-4134-8f67-28864edb1054,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:19.358760 kubelet[2699]: E0117 00:16:19.358101 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:19.358760 kubelet[2699]: W0117 00:16:19.358127 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:19.358760 kubelet[2699]: E0117 00:16:19.358152 2699 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:19.394536 containerd[1599]: time="2026-01-17T00:16:19.394237633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rdt2,Uid:60fe70bf-fe89-4352-addc-bf4afdad905d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\"" Jan 17 00:16:19.397866 kubelet[2699]: E0117 00:16:19.397143 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:19.399106 containerd[1599]: time="2026-01-17T00:16:19.398709357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:16:19.434086 containerd[1599]: time="2026-01-17T00:16:19.433958093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:19.435089 containerd[1599]: time="2026-01-17T00:16:19.434944579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:19.435591 containerd[1599]: time="2026-01-17T00:16:19.435243850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:19.435591 containerd[1599]: time="2026-01-17T00:16:19.435370437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:19.579357 containerd[1599]: time="2026-01-17T00:16:19.577697310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857c7bc8bb-98q5p,Uid:b9c7f9c8-c568-4134-8f67-28864edb1054,Namespace:calico-system,Attempt:0,} returns sandbox id \"00e3c5f2c834bbe03bcc10e7de27a1f62fe8381202d106a5cb8ec679a5e50d98\"" Jan 17 00:16:19.582108 kubelet[2699]: E0117 00:16:19.582060 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:20.801500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457973279.mount: Deactivated successfully. 
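The repeating triplet above (driver-call.go error, FlexVolume warning, plugins.go probe error) is one failure reported three ways: kubelet's FlexVolume prober finds the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec its uds binary with the init command, finds no binary and hence gets no output, and then fails to parse that empty output as the JSON status a FlexVolume driver must print. The uds binary is what Calico's pod2daemon-flexvol init container installs, and its image pull begins a few entries up, so the noise stops once that container runs. A minimal Go sketch of the parsing step, with a hypothetical DriverStatus shape modeled on the FlexVolume convention (not kubelet's actual types):

```go
// Why a missing FlexVolume binary surfaces as "unexpected end of JSON input":
// the driver's stdout is empty, and encoding/json refuses zero bytes.
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver prints for "init";
// the field names follow the FlexVolume convention, not kubelet's source.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st DriverStatus

	// A working driver answers "init" with something like this:
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Printf("parsed: %+v\n", st)
	}

	// A missing binary yields empty output, reproducing the logged error.
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("error:", err) // prints: unexpected end of JSON input
	}
}
```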
Jan 17 00:16:20.947873 containerd[1599]: time="2026-01-17T00:16:20.946987300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:20.949209 containerd[1599]: time="2026-01-17T00:16:20.949155314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 17 00:16:20.950127 containerd[1599]: time="2026-01-17T00:16:20.950100954Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:20.953044 containerd[1599]: time="2026-01-17T00:16:20.953002845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:20.953602 containerd[1599]: time="2026-01-17T00:16:20.953563355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.554798422s" Jan 17 00:16:20.953658 containerd[1599]: time="2026-01-17T00:16:20.953611245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:16:20.959114 containerd[1599]: time="2026-01-17T00:16:20.958510247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:16:20.961215 containerd[1599]: time="2026-01-17T00:16:20.961177872Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:16:21.010381 containerd[1599]: time="2026-01-17T00:16:21.009607496Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70\"" Jan 17 00:16:21.012606 containerd[1599]: time="2026-01-17T00:16:21.010802007Z" level=info msg="StartContainer for \"539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70\"" Jan 17 00:16:21.101808 containerd[1599]: time="2026-01-17T00:16:21.101601254Z" level=info msg="StartContainer for \"539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70\" returns successfully" Jan 17 00:16:21.146727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70-rootfs.mount: Deactivated successfully. 
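The "Nameserver limits exceeded" entries that keep surfacing around this point come from kubelet's resolv.conf handling: the resolv.conf format supports at most three nameservers, so kubelet applies the first three it sees and warns about the rest. The applied line it reports, 67.207.67.3 67.207.67.2 67.207.67.3, already contains a duplicate, which implies the droplet's /etc/resolv.conf lists DigitalOcean's resolvers more than three times over. A hedged sketch of that clamping (the names here are illustrative, not kubelet's internals):

```go
// Sketch of the three-nameserver clamp behind the recurring
// "Nameserver limits exceeded" kubelet entries.
package main

import "fmt"

// maxNameservers is the classic resolv.conf limit that kubelet enforces.
const maxNameservers = 3

// clampNameservers keeps the first three entries and reports truncation.
func clampNameservers(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf contents consistent with the log:
	// the same two resolvers repeated, four entries in total.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.2"}
	if applied, truncated := clampNameservers(host); truncated {
		fmt.Println("Nameserver limits exceeded; applied:", applied)
		// applied: [67.207.67.3 67.207.67.2 67.207.67.3], matching the log
	}
}
```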
Jan 17 00:16:21.155334 containerd[1599]: time="2026-01-17T00:16:21.155254637Z" level=info msg="shim disconnected" id=539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70 namespace=k8s.io Jan 17 00:16:21.155334 containerd[1599]: time="2026-01-17T00:16:21.155326347Z" level=warning msg="cleaning up after shim disconnected" id=539297ec0adc500f383784ea7d03032c0e9bc884641af404bf16ba911ea97a70 namespace=k8s.io Jan 17 00:16:21.155334 containerd[1599]: time="2026-01-17T00:16:21.155335451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:21.365856 kubelet[2699]: E0117 00:16:21.365215 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:21.505344 kubelet[2699]: E0117 00:16:21.504976 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:23.370770 kubelet[2699]: E0117 00:16:23.370728 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:23.511584 containerd[1599]: time="2026-01-17T00:16:23.511005620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:23.513309 containerd[1599]: time="2026-01-17T00:16:23.513129089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 17 00:16:23.513731 containerd[1599]: time="2026-01-17T00:16:23.513671347Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:23.515686 containerd[1599]: time="2026-01-17T00:16:23.515646795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:23.516378 containerd[1599]: time="2026-01-17T00:16:23.516333410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.557781791s" Jan 17 00:16:23.516378 containerd[1599]: time="2026-01-17T00:16:23.516371064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:16:23.517852 containerd[1599]: time="2026-01-17T00:16:23.517798921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:16:23.542566 containerd[1599]: time="2026-01-17T00:16:23.542523583Z" level=info msg="CreateContainer within sandbox \"00e3c5f2c834bbe03bcc10e7de27a1f62fe8381202d106a5cb8ec679a5e50d98\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:16:23.560100 containerd[1599]: time="2026-01-17T00:16:23.560024047Z" level=info msg="CreateContainer within sandbox \"00e3c5f2c834bbe03bcc10e7de27a1f62fe8381202d106a5cb8ec679a5e50d98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"45aad346e501450c936f0b42b093d291942d5ab12bb4b9c19fe8db2dba72da6f\"" Jan 17 00:16:23.560969 containerd[1599]: time="2026-01-17T00:16:23.560881961Z" level=info msg="StartContainer for \"45aad346e501450c936f0b42b093d291942d5ab12bb4b9c19fe8db2dba72da6f\"" Jan 17 00:16:23.677973 containerd[1599]: time="2026-01-17T00:16:23.675313665Z" level=info msg="StartContainer for \"45aad346e501450c936f0b42b093d291942d5ab12bb4b9c19fe8db2dba72da6f\" returns successfully" Jan 17 00:16:24.521865 kubelet[2699]: E0117 00:16:24.521780 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:24.540503 kubelet[2699]: I0117 00:16:24.540423 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-857c7bc8bb-98q5p" podStartSLOduration=2.608271236 podStartE2EDuration="6.540402687s" podCreationTimestamp="2026-01-17 00:16:18 +0000 UTC" firstStartedPulling="2026-01-17 00:16:19.585542036 +0000 UTC m=+24.365972753" lastFinishedPulling="2026-01-17 00:16:23.51767349 +0000 UTC m=+28.298104204" observedRunningTime="2026-01-17 00:16:24.539092986 +0000 UTC m=+29.319523709" watchObservedRunningTime="2026-01-17 00:16:24.540402687 +0000 UTC m=+29.320833431" Jan 17 00:16:25.367766 kubelet[2699]: E0117 00:16:25.366415 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:25.524982 kubelet[2699]: I0117 00:16:25.524940 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:25.526209 kubelet[2699]: E0117 00:16:25.525344 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:27.366195 kubelet[2699]: E0117 00:16:27.365915 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:27.816565 containerd[1599]: time="2026-01-17T00:16:27.816249056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:27.818125 containerd[1599]: time="2026-01-17T00:16:27.817727426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:16:27.819577 containerd[1599]: time="2026-01-17T00:16:27.819250446Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:27.823686 containerd[1599]: time="2026-01-17T00:16:27.823622197Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:27.825781 containerd[1599]: time="2026-01-17T00:16:27.825131613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.307292676s" Jan 17 00:16:27.825781 containerd[1599]: time="2026-01-17T00:16:27.825196569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:16:27.832030 containerd[1599]: time="2026-01-17T00:16:27.831972034Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:16:27.855186 containerd[1599]: time="2026-01-17T00:16:27.855091930Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42\"" Jan 17 00:16:27.857171 containerd[1599]: time="2026-01-17T00:16:27.857119147Z" level=info msg="StartContainer for \"ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42\"" Jan 17 00:16:27.975516 containerd[1599]: time="2026-01-17T00:16:27.975455991Z" level=info msg="StartContainer for \"ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42\" returns successfully" Jan 17 00:16:28.544087 kubelet[2699]: E0117 00:16:28.542954 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:28.764061 kubelet[2699]: I0117 00:16:28.763945 2699 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:16:28.775724 containerd[1599]: time="2026-01-17T00:16:28.774590432Z" level=info msg="shim disconnected" id=ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42 namespace=k8s.io Jan 17 00:16:28.775724 containerd[1599]: time="2026-01-17T00:16:28.774682905Z" level=warning msg="cleaning up after shim disconnected" id=ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42 namespace=k8s.io Jan 17 00:16:28.775724 containerd[1599]: time="2026-01-17T00:16:28.774694466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:28.775887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef766552ac8ab658cead2ac4cf338e7c17164b1a6bba322eee283a474cef1f42-rootfs.mount: Deactivated successfully. 
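The pod_startup_latency_tracker entry above for calico-typha-857c7bc8bb-98q5p is internally consistent and worth unpacking: podStartSLOduration is meant to exclude image-pull time, so it should equal podStartE2EDuration minus (lastFinishedPulling - firstStartedPulling), computed on the monotonic clock (the m=+... offsets in each timestamp). A quick Go check of that identity using the values from the log:

```go
// Re-deriving podStartSLOduration for calico-typha-857c7bc8bb-98q5p from the
// monotonic-clock offsets (m=+...) logged by the kubelet.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 24.365972753 // m=+ offset, in seconds
		lastFinishedPulling = 28.298104204
		e2eDuration         = 6.540402687 // podStartE2EDuration
	)
	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulling: %.9fs\n", pulling) // 3.932131451s
	fmt.Printf("SLO duration:  %.9fs\n", e2eDuration-pulling)
	// Prints 2.608271236s, matching the logged podStartSLOduration.
}
```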
Jan 17 00:16:28.859979 kubelet[2699]: W0117 00:16:28.857043 2699 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.6-n-cccb0c3e85" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object Jan 17 00:16:28.859979 kubelet[2699]: E0117 00:16:28.857103 2699 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.6-n-cccb0c3e85\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object" logger="UnhandledError" Jan 17 00:16:28.859979 kubelet[2699]: W0117 00:16:28.857175 2699 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.6-n-cccb0c3e85" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object Jan 17 00:16:28.859979 kubelet[2699]: E0117 00:16:28.857198 2699 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.6-n-cccb0c3e85\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object" logger="UnhandledError" Jan 17 00:16:28.874026 kubelet[2699]: I0117 00:16:28.873028 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec675aa1-75e7-4358-af19-bc10fabdfd85-config-volume\") pod \"coredns-668d6bf9bc-5nql5\" (UID: \"ec675aa1-75e7-4358-af19-bc10fabdfd85\") " pod="kube-system/coredns-668d6bf9bc-5nql5" Jan 17 00:16:28.874026 kubelet[2699]: I0117 00:16:28.873093 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8nb\" (UniqueName: \"kubernetes.io/projected/b0c84ef6-254a-45d6-83f8-3efb7d2d1036-kube-api-access-fz8nb\") pod \"coredns-668d6bf9bc-59mdc\" (UID: \"b0c84ef6-254a-45d6-83f8-3efb7d2d1036\") " pod="kube-system/coredns-668d6bf9bc-59mdc" Jan 17 00:16:28.874026 kubelet[2699]: I0117 00:16:28.873133 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c84ef6-254a-45d6-83f8-3efb7d2d1036-config-volume\") pod \"coredns-668d6bf9bc-59mdc\" (UID: \"b0c84ef6-254a-45d6-83f8-3efb7d2d1036\") " pod="kube-system/coredns-668d6bf9bc-59mdc" Jan 17 00:16:28.874026 kubelet[2699]: I0117 00:16:28.873172 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrd9\" (UniqueName: \"kubernetes.io/projected/ec675aa1-75e7-4358-af19-bc10fabdfd85-kube-api-access-zqrd9\") pod \"coredns-668d6bf9bc-5nql5\" (UID: \"ec675aa1-75e7-4358-af19-bc10fabdfd85\") " pod="kube-system/coredns-668d6bf9bc-5nql5" Jan 17 00:16:28.874026 kubelet[2699]: I0117 00:16:28.873199 2699 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-backend-key-pair\") pod \"whisker-f9cc75987-n26vh\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " pod="calico-system/whisker-f9cc75987-n26vh" Jan 17 00:16:28.874323 kubelet[2699]: I0117 00:16:28.873231 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5hf\" (UniqueName: \"kubernetes.io/projected/9a74e7d7-2baa-4c90-956f-f975e2acce52-kube-api-access-jb5hf\") pod \"whisker-f9cc75987-n26vh\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " pod="calico-system/whisker-f9cc75987-n26vh" Jan 17 00:16:28.874323 kubelet[2699]: I0117 00:16:28.873369 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle\") pod \"whisker-f9cc75987-n26vh\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " pod="calico-system/whisker-f9cc75987-n26vh" Jan 17 00:16:28.900287 kubelet[2699]: W0117 00:16:28.897148 2699 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.6-n-cccb0c3e85" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object Jan 17 00:16:28.900287 kubelet[2699]: E0117 00:16:28.897400 2699 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-cccb0c3e85\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object" logger="UnhandledError" Jan 17 00:16:28.900287 kubelet[2699]: W0117 00:16:28.897781 2699 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-cccb0c3e85" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object Jan 17 00:16:28.900287 kubelet[2699]: E0117 00:16:28.897809 2699 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-cccb0c3e85\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-cccb0c3e85' and this object" logger="UnhandledError" Jan 17 00:16:28.979877 kubelet[2699]: I0117 00:16:28.974768 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmst\" (UniqueName: \"kubernetes.io/projected/cd6dbe24-c430-428d-92d9-91f581859d83-kube-api-access-mbmst\") pod \"calico-apiserver-6c6cc8d58d-g2rj5\" (UID: \"cd6dbe24-c430-428d-92d9-91f581859d83\") " pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" Jan 17 00:16:28.979877 kubelet[2699]: I0117 00:16:28.974974 2699 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b65c17-4b2e-4680-86fb-3425314d6580-goldmane-ca-bundle\") pod \"goldmane-666569f655-cd6lg\" (UID: \"96b65c17-4b2e-4680-86fb-3425314d6580\") " pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:28.979877 kubelet[2699]: I0117 00:16:28.975024 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b65c17-4b2e-4680-86fb-3425314d6580-config\") pod \"goldmane-666569f655-cd6lg\" (UID: \"96b65c17-4b2e-4680-86fb-3425314d6580\") " pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:28.979877 kubelet[2699]: I0117 00:16:28.975054 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd6dbe24-c430-428d-92d9-91f581859d83-calico-apiserver-certs\") pod \"calico-apiserver-6c6cc8d58d-g2rj5\" (UID: \"cd6dbe24-c430-428d-92d9-91f581859d83\") " pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" Jan 17 00:16:28.979877 kubelet[2699]: I0117 00:16:28.975126 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9cvq\" (UniqueName: \"kubernetes.io/projected/96b65c17-4b2e-4680-86fb-3425314d6580-kube-api-access-p9cvq\") pod \"goldmane-666569f655-cd6lg\" (UID: \"96b65c17-4b2e-4680-86fb-3425314d6580\") " pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:28.980613 kubelet[2699]: I0117 00:16:28.975175 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43a11e4d-d5b2-4905-990b-145b7f453524-calico-apiserver-certs\") pod \"calico-apiserver-6c6cc8d58d-8tc5j\" (UID: \"43a11e4d-d5b2-4905-990b-145b7f453524\") " pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" Jan 17 00:16:28.980613 kubelet[2699]: I0117 00:16:28.975220 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/96b65c17-4b2e-4680-86fb-3425314d6580-goldmane-key-pair\") pod \"goldmane-666569f655-cd6lg\" (UID: \"96b65c17-4b2e-4680-86fb-3425314d6580\") " pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:28.980613 kubelet[2699]: I0117 00:16:28.975244 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xccxs\" (UniqueName: \"kubernetes.io/projected/7b8b1bac-c0de-45cb-b647-eb4712722238-kube-api-access-xccxs\") pod \"calico-kube-controllers-7d4ffb8bcd-m826d\" (UID: \"7b8b1bac-c0de-45cb-b647-eb4712722238\") " pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" Jan 17 00:16:28.980613 kubelet[2699]: I0117 00:16:28.975282 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb95\" (UniqueName: \"kubernetes.io/projected/43a11e4d-d5b2-4905-990b-145b7f453524-kube-api-access-vzb95\") pod \"calico-apiserver-6c6cc8d58d-8tc5j\" (UID: \"43a11e4d-d5b2-4905-990b-145b7f453524\") " pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" Jan 17 00:16:28.980613 kubelet[2699]: I0117 00:16:28.975314 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7b8b1bac-c0de-45cb-b647-eb4712722238-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4ffb8bcd-m826d\" (UID: \"7b8b1bac-c0de-45cb-b647-eb4712722238\") " pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" Jan 17 00:16:29.135328 kubelet[2699]: E0117 00:16:29.135168 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:29.136664 containerd[1599]: time="2026-01-17T00:16:29.136212823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59mdc,Uid:b0c84ef6-254a-45d6-83f8-3efb7d2d1036,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:29.157864 kubelet[2699]: E0117 00:16:29.154734 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:29.158040 containerd[1599]: time="2026-01-17T00:16:29.155692860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nql5,Uid:ec675aa1-75e7-4358-af19-bc10fabdfd85,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:29.177865 containerd[1599]: time="2026-01-17T00:16:29.177293354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4ffb8bcd-m826d,Uid:7b8b1bac-c0de-45cb-b647-eb4712722238,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:29.236802 containerd[1599]: time="2026-01-17T00:16:29.236715374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cd6lg,Uid:96b65c17-4b2e-4680-86fb-3425314d6580,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:29.379591 containerd[1599]: time="2026-01-17T00:16:29.379532300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvltb,Uid:fe4a7e29-720a-4e34-a53e-e9187d031f57,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:29.554993 containerd[1599]: time="2026-01-17T00:16:29.554722300Z" level=error msg="Failed to destroy network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.555798 kubelet[2699]: E0117 00:16:29.555307 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:29.563732 containerd[1599]: time="2026-01-17T00:16:29.562553039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:16:29.569422 containerd[1599]: time="2026-01-17T00:16:29.569335747Z" level=error msg="encountered an error cleaning up failed sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.586337 containerd[1599]: time="2026-01-17T00:16:29.585948987Z" level=error msg="Failed to destroy network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 00:16:29.589357 containerd[1599]: time="2026-01-17T00:16:29.589305429Z" level=error msg="encountered an error cleaning up failed sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.589599 containerd[1599]: time="2026-01-17T00:16:29.589575318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59mdc,Uid:b0c84ef6-254a-45d6-83f8-3efb7d2d1036,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.601894 kubelet[2699]: E0117 00:16:29.601842 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.602046 kubelet[2699]: E0117 00:16:29.601934 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-59mdc" Jan 17 00:16:29.602046 kubelet[2699]: E0117 00:16:29.601961 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-59mdc" Jan 17 00:16:29.602995 kubelet[2699]: E0117 00:16:29.602031 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-59mdc_kube-system(b0c84ef6-254a-45d6-83f8-3efb7d2d1036)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-59mdc_kube-system(b0c84ef6-254a-45d6-83f8-3efb7d2d1036)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-59mdc" podUID="b0c84ef6-254a-45d6-83f8-3efb7d2d1036" Jan 17 00:16:29.627405 containerd[1599]: time="2026-01-17T00:16:29.627316377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4ffb8bcd-m826d,Uid:7b8b1bac-c0de-45cb-b647-eb4712722238,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.628417 kubelet[2699]: E0117 00:16:29.627987 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.628417 kubelet[2699]: E0117 00:16:29.628062 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" Jan 17 00:16:29.628417 kubelet[2699]: E0117 00:16:29.628088 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" Jan 17 00:16:29.628654 kubelet[2699]: E0117 00:16:29.628143 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4ffb8bcd-m826d_calico-system(7b8b1bac-c0de-45cb-b647-eb4712722238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4ffb8bcd-m826d_calico-system(7b8b1bac-c0de-45cb-b647-eb4712722238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:16:29.633258 containerd[1599]: time="2026-01-17T00:16:29.633209410Z" level=error msg="Failed to destroy network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.633945 containerd[1599]: time="2026-01-17T00:16:29.633733719Z" level=error msg="encountered an error cleaning up failed sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.633945 containerd[1599]: time="2026-01-17T00:16:29.633789759Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5nql5,Uid:ec675aa1-75e7-4358-af19-bc10fabdfd85,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.635544 kubelet[2699]: E0117 00:16:29.634386 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.635544 kubelet[2699]: E0117 00:16:29.634684 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nql5" Jan 17 00:16:29.635544 kubelet[2699]: E0117 00:16:29.634725 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nql5" Jan 17 00:16:29.635720 kubelet[2699]: E0117 00:16:29.635040 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5nql5_kube-system(ec675aa1-75e7-4358-af19-bc10fabdfd85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5nql5_kube-system(ec675aa1-75e7-4358-af19-bc10fabdfd85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nql5" podUID="ec675aa1-75e7-4358-af19-bc10fabdfd85" Jan 17 00:16:29.643161 containerd[1599]: time="2026-01-17T00:16:29.642475219Z" level=error msg="Failed to destroy network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.643161 containerd[1599]: time="2026-01-17T00:16:29.643048987Z" level=error msg="encountered an error cleaning up failed sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.643161 containerd[1599]: 
time="2026-01-17T00:16:29.643104991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvltb,Uid:fe4a7e29-720a-4e34-a53e-e9187d031f57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.643768 kubelet[2699]: E0117 00:16:29.643686 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.643919 kubelet[2699]: E0117 00:16:29.643785 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:29.643919 kubelet[2699]: E0117 00:16:29.643810 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pvltb" Jan 17 00:16:29.644011 kubelet[2699]: E0117 00:16:29.643919 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:29.657687 containerd[1599]: time="2026-01-17T00:16:29.657632430Z" level=error msg="Failed to destroy network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.658516 containerd[1599]: time="2026-01-17T00:16:29.658276354Z" level=error msg="encountered an error cleaning up failed sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 17 00:16:29.658516 containerd[1599]: time="2026-01-17T00:16:29.658345444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cd6lg,Uid:96b65c17-4b2e-4680-86fb-3425314d6580,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.658797 kubelet[2699]: E0117 00:16:29.658707 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:29.658797 kubelet[2699]: E0117 00:16:29.658785 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:29.659048 kubelet[2699]: E0117 00:16:29.658813 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cd6lg" Jan 17 00:16:29.659048 kubelet[2699]: E0117 00:16:29.658951 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cd6lg_calico-system(96b65c17-4b2e-4680-86fb-3425314d6580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cd6lg_calico-system(96b65c17-4b2e-4680-86fb-3425314d6580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:16:29.977003 kubelet[2699]: E0117 00:16:29.976811 2699 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:29.977003 kubelet[2699]: E0117 00:16:29.976985 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle podName:9a74e7d7-2baa-4c90-956f-f975e2acce52 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:30.476955131 +0000 UTC m=+35.257385846 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle") pod "whisker-f9cc75987-n26vh" (UID: "9a74e7d7-2baa-4c90-956f-f975e2acce52") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.030695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359-shm.mount: Deactivated successfully. Jan 17 00:16:30.078754 kubelet[2699]: E0117 00:16:30.078527 2699 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 00:16:30.078754 kubelet[2699]: E0117 00:16:30.078567 2699 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 00:16:30.078754 kubelet[2699]: E0117 00:16:30.078652 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43a11e4d-d5b2-4905-990b-145b7f453524-calico-apiserver-certs podName:43a11e4d-d5b2-4905-990b-145b7f453524 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:30.5786289 +0000 UTC m=+35.359059599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/43a11e4d-d5b2-4905-990b-145b7f453524-calico-apiserver-certs") pod "calico-apiserver-6c6cc8d58d-8tc5j" (UID: "43a11e4d-d5b2-4905-990b-145b7f453524") : failed to sync secret cache: timed out waiting for the condition Jan 17 00:16:30.078754 kubelet[2699]: E0117 00:16:30.078688 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd6dbe24-c430-428d-92d9-91f581859d83-calico-apiserver-certs podName:cd6dbe24-c430-428d-92d9-91f581859d83 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:30.578659883 +0000 UTC m=+35.359090618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/cd6dbe24-c430-428d-92d9-91f581859d83-calico-apiserver-certs") pod "calico-apiserver-6c6cc8d58d-g2rj5" (UID: "cd6dbe24-c430-428d-92d9-91f581859d83") : failed to sync secret cache: timed out waiting for the condition Jan 17 00:16:30.101893 kubelet[2699]: E0117 00:16:30.101514 2699 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.101893 kubelet[2699]: E0117 00:16:30.101590 2699 projected.go:194] Error preparing data for projected volume kube-api-access-vzb95 for pod calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.101893 kubelet[2699]: E0117 00:16:30.101698 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43a11e4d-d5b2-4905-990b-145b7f453524-kube-api-access-vzb95 podName:43a11e4d-d5b2-4905-990b-145b7f453524 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:30.601669062 +0000 UTC m=+35.382099795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vzb95" (UniqueName: "kubernetes.io/projected/43a11e4d-d5b2-4905-990b-145b7f453524-kube-api-access-vzb95") pod "calico-apiserver-6c6cc8d58d-8tc5j" (UID: "43a11e4d-d5b2-4905-990b-145b7f453524") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.111757 kubelet[2699]: E0117 00:16:30.111669 2699 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.111757 kubelet[2699]: E0117 00:16:30.111726 2699 projected.go:194] Error preparing data for projected volume kube-api-access-mbmst for pod calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.112055 kubelet[2699]: E0117 00:16:30.111797 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd6dbe24-c430-428d-92d9-91f581859d83-kube-api-access-mbmst podName:cd6dbe24-c430-428d-92d9-91f581859d83 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:30.611778299 +0000 UTC m=+35.392209013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mbmst" (UniqueName: "kubernetes.io/projected/cd6dbe24-c430-428d-92d9-91f581859d83-kube-api-access-mbmst") pod "calico-apiserver-6c6cc8d58d-g2rj5" (UID: "cd6dbe24-c430-428d-92d9-91f581859d83") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:30.556914 kubelet[2699]: I0117 00:16:30.556166 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:30.563196 kubelet[2699]: I0117 00:16:30.563162 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:30.565424 containerd[1599]: time="2026-01-17T00:16:30.565150890Z" level=info msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" Jan 17 00:16:30.570551 containerd[1599]: time="2026-01-17T00:16:30.569535361Z" level=info msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" Jan 17 00:16:30.570551 containerd[1599]: time="2026-01-17T00:16:30.570139175Z" level=info msg="Ensure that sandbox 6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726 in task-service has been cleanup successfully" Jan 17 00:16:30.570985 containerd[1599]: time="2026-01-17T00:16:30.570770816Z" level=info msg="Ensure that sandbox 0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2 in task-service has been cleanup successfully" Jan 17 00:16:30.573743 kubelet[2699]: I0117 00:16:30.573596 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:30.577786 containerd[1599]: time="2026-01-17T00:16:30.576568917Z" level=info msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" Jan 17 00:16:30.578578 containerd[1599]: time="2026-01-17T00:16:30.578153512Z" level=info msg="Ensure that sandbox 5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492 in task-service has been cleanup successfully" Jan 17 00:16:30.581117 kubelet[2699]: I0117 00:16:30.580987 2699 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:30.583326 containerd[1599]: time="2026-01-17T00:16:30.583282596Z" level=info msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" Jan 17 00:16:30.584384 containerd[1599]: time="2026-01-17T00:16:30.584039902Z" level=info msg="Ensure that sandbox 874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359 in task-service has been cleanup successfully" Jan 17 00:16:30.593424 kubelet[2699]: I0117 00:16:30.593384 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:30.598299 containerd[1599]: time="2026-01-17T00:16:30.597435326Z" level=info msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" Jan 17 00:16:30.600435 containerd[1599]: time="2026-01-17T00:16:30.599924235Z" level=info msg="Ensure that sandbox 85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2 in task-service has been cleanup successfully" Jan 17 00:16:30.650151 containerd[1599]: time="2026-01-17T00:16:30.649970046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9cc75987-n26vh,Uid:9a74e7d7-2baa-4c90-956f-f975e2acce52,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:30.711417 containerd[1599]: time="2026-01-17T00:16:30.711031373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-g2rj5,Uid:cd6dbe24-c430-428d-92d9-91f581859d83,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:30.743012 containerd[1599]: time="2026-01-17T00:16:30.742950411Z" level=error msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" failed" error="failed to destroy network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.750531 containerd[1599]: time="2026-01-17T00:16:30.743321535Z" level=error msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" failed" error="failed to destroy network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.751292 kubelet[2699]: E0117 00:16:30.751020 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:30.751292 kubelet[2699]: E0117 00:16:30.751114 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2"} Jan 17 00:16:30.751292 kubelet[2699]: E0117 00:16:30.751196 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96b65c17-4b2e-4680-86fb-3425314d6580\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:30.751292 kubelet[2699]: E0117 00:16:30.751226 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96b65c17-4b2e-4680-86fb-3425314d6580\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:16:30.752188 kubelet[2699]: E0117 00:16:30.751614 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:30.752188 kubelet[2699]: E0117 00:16:30.751646 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359"} Jan 17 00:16:30.752188 kubelet[2699]: E0117 00:16:30.751685 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0c84ef6-254a-45d6-83f8-3efb7d2d1036\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:30.752188 kubelet[2699]: E0117 00:16:30.751719 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0c84ef6-254a-45d6-83f8-3efb7d2d1036\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-59mdc" podUID="b0c84ef6-254a-45d6-83f8-3efb7d2d1036" Jan 17 00:16:30.775139 containerd[1599]: time="2026-01-17T00:16:30.775070547Z" level=error msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" failed" error="failed to destroy network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.775659 kubelet[2699]: E0117 00:16:30.775620 2699 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:30.776091 kubelet[2699]: E0117 00:16:30.775889 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726"} Jan 17 00:16:30.776091 kubelet[2699]: E0117 00:16:30.776058 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe4a7e29-720a-4e34-a53e-e9187d031f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:30.776954 kubelet[2699]: E0117 00:16:30.776813 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe4a7e29-720a-4e34-a53e-e9187d031f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:30.781114 containerd[1599]: time="2026-01-17T00:16:30.781052487Z" level=error msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" failed" error="failed to destroy network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.781681 kubelet[2699]: E0117 00:16:30.781506 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:30.781681 kubelet[2699]: E0117 00:16:30.781565 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492"} Jan 17 00:16:30.781681 kubelet[2699]: E0117 00:16:30.781599 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec675aa1-75e7-4358-af19-bc10fabdfd85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:30.781681 kubelet[2699]: E0117 00:16:30.781625 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec675aa1-75e7-4358-af19-bc10fabdfd85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nql5" podUID="ec675aa1-75e7-4358-af19-bc10fabdfd85" Jan 17 00:16:30.787546 containerd[1599]: time="2026-01-17T00:16:30.786724715Z" level=error msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" failed" error="failed to destroy network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.787693 kubelet[2699]: E0117 00:16:30.787333 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:30.788165 kubelet[2699]: E0117 00:16:30.787512 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2"} Jan 17 00:16:30.788165 kubelet[2699]: E0117 00:16:30.787804 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b8b1bac-c0de-45cb-b647-eb4712722238\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:30.788165 kubelet[2699]: E0117 00:16:30.787890 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b8b1bac-c0de-45cb-b647-eb4712722238\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:16:30.840461 containerd[1599]: time="2026-01-17T00:16:30.840224020Z" level=error msg="Failed to destroy network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.841138 containerd[1599]: time="2026-01-17T00:16:30.840895567Z" level=error msg="encountered an error cleaning up failed sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.841138 containerd[1599]: time="2026-01-17T00:16:30.840959737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9cc75987-n26vh,Uid:9a74e7d7-2baa-4c90-956f-f975e2acce52,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.842102 kubelet[2699]: E0117 00:16:30.842039 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.842305 kubelet[2699]: E0117 00:16:30.842262 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f9cc75987-n26vh" Jan 17 00:16:30.842427 kubelet[2699]: E0117 00:16:30.842398 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f9cc75987-n26vh" Jan 17 00:16:30.843118 kubelet[2699]: E0117 00:16:30.842674 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f9cc75987-n26vh_calico-system(9a74e7d7-2baa-4c90-956f-f975e2acce52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f9cc75987-n26vh_calico-system(9a74e7d7-2baa-4c90-956f-f975e2acce52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f9cc75987-n26vh" podUID="9a74e7d7-2baa-4c90-956f-f975e2acce52" Jan 17 00:16:30.892820 containerd[1599]: time="2026-01-17T00:16:30.891589913Z" level=error msg="Failed to destroy network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.892820 containerd[1599]: time="2026-01-17T00:16:30.892114323Z" level=error msg="encountered an error cleaning up failed sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.892820 containerd[1599]: time="2026-01-17T00:16:30.892178330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-g2rj5,Uid:cd6dbe24-c430-428d-92d9-91f581859d83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.893893 kubelet[2699]: E0117 00:16:30.893363 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:30.893893 kubelet[2699]: E0117 00:16:30.893445 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" Jan 17 00:16:30.893893 kubelet[2699]: E0117 00:16:30.893501 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" Jan 17 00:16:30.894084 kubelet[2699]: E0117 00:16:30.893554 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6cc8d58d-g2rj5_calico-apiserver(cd6dbe24-c430-428d-92d9-91f581859d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6cc8d58d-g2rj5_calico-apiserver(cd6dbe24-c430-428d-92d9-91f581859d83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:16:30.975977 containerd[1599]: time="2026-01-17T00:16:30.974401757Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-8tc5j,Uid:43a11e4d-d5b2-4905-990b-145b7f453524,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:31.034043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a-shm.mount: Deactivated successfully. Jan 17 00:16:31.130515 containerd[1599]: time="2026-01-17T00:16:31.128046069Z" level=error msg="Failed to destroy network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.130515 containerd[1599]: time="2026-01-17T00:16:31.130238445Z" level=error msg="encountered an error cleaning up failed sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.130515 containerd[1599]: time="2026-01-17T00:16:31.130307062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-8tc5j,Uid:43a11e4d-d5b2-4905-990b-145b7f453524,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.133030 kubelet[2699]: E0117 00:16:31.132980 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.133173 kubelet[2699]: E0117 00:16:31.133052 2699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" Jan 17 00:16:31.133173 kubelet[2699]: E0117 00:16:31.133075 2699 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" Jan 17 00:16:31.133173 kubelet[2699]: E0117 00:16:31.133131 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6cc8d58d-8tc5j_calico-apiserver(43a11e4d-d5b2-4905-990b-145b7f453524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6c6cc8d58d-8tc5j_calico-apiserver(43a11e4d-d5b2-4905-990b-145b7f453524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:16:31.137342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4-shm.mount: Deactivated successfully. Jan 17 00:16:31.609822 kubelet[2699]: I0117 00:16:31.609771 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:31.614908 containerd[1599]: time="2026-01-17T00:16:31.613777256Z" level=info msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" Jan 17 00:16:31.614908 containerd[1599]: time="2026-01-17T00:16:31.614121305Z" level=info msg="Ensure that sandbox eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4 in task-service has been cleanup successfully" Jan 17 00:16:31.633669 kubelet[2699]: I0117 00:16:31.633613 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:31.635945 containerd[1599]: time="2026-01-17T00:16:31.635750850Z" level=info msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" Jan 17 00:16:31.636566 containerd[1599]: time="2026-01-17T00:16:31.636457767Z" level=info msg="Ensure that sandbox da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514 in task-service has been cleanup successfully" Jan 17 00:16:31.641853 kubelet[2699]: I0117 00:16:31.641743 2699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:31.645402 containerd[1599]: time="2026-01-17T00:16:31.644740766Z" level=info msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" Jan 17 00:16:31.645402 containerd[1599]: time="2026-01-17T00:16:31.645028955Z" level=info msg="Ensure that sandbox 44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a in task-service has been cleanup successfully" Jan 17 00:16:31.741225 containerd[1599]: time="2026-01-17T00:16:31.741175747Z" level=error msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" failed" error="failed to destroy network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.741721 kubelet[2699]: E0117 00:16:31.741667 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:31.741787 kubelet[2699]: E0117 00:16:31.741747 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a"} Jan 17 00:16:31.741812 kubelet[2699]: E0117 00:16:31.741796 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a74e7d7-2baa-4c90-956f-f975e2acce52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:31.741902 kubelet[2699]: E0117 00:16:31.741863 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a74e7d7-2baa-4c90-956f-f975e2acce52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f9cc75987-n26vh" podUID="9a74e7d7-2baa-4c90-956f-f975e2acce52" Jan 17 00:16:31.742066 containerd[1599]: time="2026-01-17T00:16:31.742023188Z" level=error msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" failed" error="failed to destroy network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.742278 kubelet[2699]: E0117 00:16:31.742241 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:31.742318 kubelet[2699]: E0117 00:16:31.742291 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4"} Jan 17 00:16:31.742352 kubelet[2699]: E0117 00:16:31.742333 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43a11e4d-d5b2-4905-990b-145b7f453524\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:31.742400 kubelet[2699]: E0117 00:16:31.742380 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43a11e4d-d5b2-4905-990b-145b7f453524\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:16:31.746225 containerd[1599]: time="2026-01-17T00:16:31.746153014Z" level=error msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" failed" error="failed to destroy network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:31.746846 kubelet[2699]: E0117 00:16:31.746432 2699 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:31.746846 kubelet[2699]: E0117 00:16:31.746499 2699 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514"} Jan 17 00:16:31.746846 kubelet[2699]: E0117 00:16:31.746547 2699 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd6dbe24-c430-428d-92d9-91f581859d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:31.746846 kubelet[2699]: E0117 00:16:31.746584 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd6dbe24-c430-428d-92d9-91f581859d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:16:36.574143 systemd-journald[1142]: Under memory pressure, flushing caches. Jan 17 00:16:36.573478 systemd-resolved[1480]: Under memory pressure, flushing caches. Jan 17 00:16:36.573553 systemd-resolved[1480]: Flushed all caches. Jan 17 00:16:37.523694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731286562.mount: Deactivated successfully. 
Jan 17 00:16:37.604686 containerd[1599]: time="2026-01-17T00:16:37.584079046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:16:37.604686 containerd[1599]: time="2026-01-17T00:16:37.604502303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.041886626s" Jan 17 00:16:37.604686 containerd[1599]: time="2026-01-17T00:16:37.604557836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:16:37.611097 containerd[1599]: time="2026-01-17T00:16:37.611031868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:37.613356 containerd[1599]: time="2026-01-17T00:16:37.612956668Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:37.617089 containerd[1599]: time="2026-01-17T00:16:37.616444248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:37.677720 containerd[1599]: time="2026-01-17T00:16:37.677599641Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:16:37.721113 containerd[1599]: time="2026-01-17T00:16:37.721056048Z" level=info msg="CreateContainer within sandbox \"6d5dd44f5e7f61e747f8cab6f5931343758373dedbf9d8d35f4704ad3e5d0e88\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9606170b1cce07b0e5ed29bd403a6375de48a667d4675ad18b5987431e05039b\"" Jan 17 00:16:37.724243 containerd[1599]: time="2026-01-17T00:16:37.724068007Z" level=info msg="StartContainer for \"9606170b1cce07b0e5ed29bd403a6375de48a667d4675ad18b5987431e05039b\"" Jan 17 00:16:37.910971 containerd[1599]: time="2026-01-17T00:16:37.910314807Z" level=info msg="StartContainer for \"9606170b1cce07b0e5ed29bd403a6375de48a667d4675ad18b5987431e05039b\" returns successfully" Jan 17 00:16:38.059022 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:16:38.059178 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:16:38.376924 containerd[1599]: time="2026-01-17T00:16:38.376861966Z" level=info msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" Jan 17 00:16:38.508814 kubelet[2699]: I0117 00:16:38.507718 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:38.508814 kubelet[2699]: E0117 00:16:38.508148 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:38.628107 systemd-journald[1142]: Under memory pressure, flushing caches.
Jan 17 00:16:38.621144 systemd-resolved[1480]: Under memory pressure, flushing caches. Jan 17 00:16:38.621153 systemd-resolved[1480]: Flushed all caches. Jan 17 00:16:38.706538 kubelet[2699]: E0117 00:16:38.706187 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:38.707696 kubelet[2699]: E0117 00:16:38.707317 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.503 [INFO][3834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.504 [INFO][3834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" iface="eth0" netns="/var/run/netns/cni-a1d11a09-8524-1b86-dd08-6b4d6d6c8ed0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.507 [INFO][3834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" iface="eth0" netns="/var/run/netns/cni-a1d11a09-8524-1b86-dd08-6b4d6d6c8ed0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.507 [INFO][3834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" iface="eth0" netns="/var/run/netns/cni-a1d11a09-8524-1b86-dd08-6b4d6d6c8ed0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.507 [INFO][3834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.507 [INFO][3834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.841 [INFO][3841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.850 [INFO][3841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.853 [INFO][3841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.888 [WARNING][3841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.888 [INFO][3841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.893 [INFO][3841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:38.905410 containerd[1599]: 2026-01-17 00:16:38.901 [INFO][3834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:38.909004 containerd[1599]: time="2026-01-17T00:16:38.905743680Z" level=info msg="TearDown network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" successfully" Jan 17 00:16:38.909004 containerd[1599]: time="2026-01-17T00:16:38.905785506Z" level=info msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" returns successfully" Jan 17 00:16:38.912868 systemd[1]: run-netns-cni\x2da1d11a09\x2d8524\x2d1b86\x2ddd08\x2d6b4d6d6c8ed0.mount: Deactivated successfully. Jan 17 00:16:39.097107 kubelet[2699]: I0117 00:16:39.096636 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5hf\" (UniqueName: \"kubernetes.io/projected/9a74e7d7-2baa-4c90-956f-f975e2acce52-kube-api-access-jb5hf\") pod \"9a74e7d7-2baa-4c90-956f-f975e2acce52\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " Jan 17 00:16:39.097107 kubelet[2699]: I0117 00:16:39.096720 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-backend-key-pair\") pod \"9a74e7d7-2baa-4c90-956f-f975e2acce52\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " Jan 17 00:16:39.097107 kubelet[2699]: I0117 00:16:39.096788 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle\") pod \"9a74e7d7-2baa-4c90-956f-f975e2acce52\" (UID: \"9a74e7d7-2baa-4c90-956f-f975e2acce52\") " Jan 17 00:16:39.162290 kubelet[2699]: I0117 00:16:39.150630 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a74e7d7-2baa-4c90-956f-f975e2acce52" (UID: "9a74e7d7-2baa-4c90-956f-f975e2acce52"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:16:39.180387 kubelet[2699]: I0117 00:16:39.177534 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a74e7d7-2baa-4c90-956f-f975e2acce52-kube-api-access-jb5hf" (OuterVolumeSpecName: "kube-api-access-jb5hf") pod "9a74e7d7-2baa-4c90-956f-f975e2acce52" (UID: "9a74e7d7-2baa-4c90-956f-f975e2acce52"). InnerVolumeSpecName "kube-api-access-jb5hf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:16:39.178734 systemd[1]: var-lib-kubelet-pods-9a74e7d7\x2d2baa\x2d4c90\x2d956f\x2df975e2acce52-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:16:39.181610 kubelet[2699]: I0117 00:16:39.180952 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a74e7d7-2baa-4c90-956f-f975e2acce52" (UID: "9a74e7d7-2baa-4c90-956f-f975e2acce52"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:16:39.198416 systemd[1]: var-lib-kubelet-pods-9a74e7d7\x2d2baa\x2d4c90\x2d956f\x2df975e2acce52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djb5hf.mount: Deactivated successfully. Jan 17 00:16:39.200275 kubelet[2699]: I0117 00:16:39.198746 2699 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jb5hf\" (UniqueName: \"kubernetes.io/projected/9a74e7d7-2baa-4c90-956f-f975e2acce52-kube-api-access-jb5hf\") on node \"ci-4081.3.6-n-cccb0c3e85\" DevicePath \"\"" Jan 17 00:16:39.200275 kubelet[2699]: I0117 00:16:39.198950 2699 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-cccb0c3e85\" DevicePath \"\"" Jan 17 00:16:39.200275 kubelet[2699]: I0117 00:16:39.198978 2699 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a74e7d7-2baa-4c90-956f-f975e2acce52-whisker-ca-bundle\") on node \"ci-4081.3.6-n-cccb0c3e85\" DevicePath \"\"" Jan 17 00:16:39.707899 kubelet[2699]: E0117 00:16:39.707689 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:39.753857 kubelet[2699]: I0117 00:16:39.751108 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2rdt2" podStartSLOduration=3.533865199 podStartE2EDuration="21.741615641s" podCreationTimestamp="2026-01-17 00:16:18 +0000 UTC" firstStartedPulling="2026-01-17 00:16:19.398340767 +0000 UTC m=+24.178771476" lastFinishedPulling="2026-01-17 00:16:37.60609122 +0000 UTC m=+42.386521918" observedRunningTime="2026-01-17 00:16:38.73871574 +0000 UTC m=+43.519146473" watchObservedRunningTime="2026-01-17 00:16:39.741615641 +0000 UTC m=+44.522046366" Jan 17 00:16:39.905819 kubelet[2699]: I0117 00:16:39.905744 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f056ee9-6914-4575-b585-f333a8c77da9-whisker-backend-key-pair\") pod \"whisker-8677d57b99-wp5xq\" (UID: \"2f056ee9-6914-4575-b585-f333a8c77da9\") " pod="calico-system/whisker-8677d57b99-wp5xq" Jan 17 00:16:39.905819 kubelet[2699]: I0117 00:16:39.905846 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f056ee9-6914-4575-b585-f333a8c77da9-whisker-ca-bundle\") pod \"whisker-8677d57b99-wp5xq\" (UID: \"2f056ee9-6914-4575-b585-f333a8c77da9\") " pod="calico-system/whisker-8677d57b99-wp5xq" Jan 17 00:16:39.905819 kubelet[2699]: I0117 
00:16:39.905868 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczdt\" (UniqueName: \"kubernetes.io/projected/2f056ee9-6914-4575-b585-f333a8c77da9-kube-api-access-xczdt\") pod \"whisker-8677d57b99-wp5xq\" (UID: \"2f056ee9-6914-4575-b585-f333a8c77da9\") " pod="calico-system/whisker-8677d57b99-wp5xq" Jan 17 00:16:39.915938 kernel: bpftool[4029]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:16:40.143082 containerd[1599]: time="2026-01-17T00:16:40.142882284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8677d57b99-wp5xq,Uid:2f056ee9-6914-4575-b585-f333a8c77da9,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:40.357726 systemd-networkd[1225]: vxlan.calico: Link UP Jan 17 00:16:40.357858 systemd-networkd[1225]: vxlan.calico: Gained carrier Jan 17 00:16:40.539618 systemd-networkd[1225]: cali18c8540355b: Link UP Jan 17 00:16:40.542500 systemd-networkd[1225]: cali18c8540355b: Gained carrier Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.265 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0 whisker-8677d57b99- calico-system 2f056ee9-6914-4575-b585-f333a8c77da9 971 0 2026-01-17 00:16:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8677d57b99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 whisker-8677d57b99-wp5xq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali18c8540355b [] [] }} ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.265 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.425 [INFO][4059] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" HandleID="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.437 [INFO][4059] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" HandleID="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005202b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"whisker-8677d57b99-wp5xq", "timestamp":"2026-01-17 00:16:40.425726551 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.438 [INFO][4059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.438 [INFO][4059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.438 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.457 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.482 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.491 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.496 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.499 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.499 [INFO][4059] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.501 [INFO][4059] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.507 [INFO][4059] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.518 [INFO][4059] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.129/26] block=192.168.19.128/26 handle="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.519 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.129/26] handle="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.519 [INFO][4059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:40.569851 containerd[1599]: 2026-01-17 00:16:40.519 [INFO][4059] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.129/26] IPv6=[] ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" HandleID="k8s-pod-network.57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.527 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0", GenerateName:"whisker-8677d57b99-", Namespace:"calico-system", SelfLink:"", UID:"2f056ee9-6914-4575-b585-f333a8c77da9", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8677d57b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"whisker-8677d57b99-wp5xq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali18c8540355b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.527 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.129/32] ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.527 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18c8540355b ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.540 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.540 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system"
Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0", GenerateName:"whisker-8677d57b99-", Namespace:"calico-system", SelfLink:"", UID:"2f056ee9-6914-4575-b585-f333a8c77da9", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8677d57b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e", Pod:"whisker-8677d57b99-wp5xq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali18c8540355b", MAC:"ee:7e:9d:a1:43:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:40.573005 containerd[1599]: 2026-01-17 00:16:40.561 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e" Namespace="calico-system" Pod="whisker-8677d57b99-wp5xq" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--8677d57b99--wp5xq-eth0" Jan 17 00:16:40.610588 containerd[1599]: time="2026-01-17T00:16:40.609394781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:40.611861 containerd[1599]: time="2026-01-17T00:16:40.610479960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:40.611861 containerd[1599]: time="2026-01-17T00:16:40.610528375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:40.611861 containerd[1599]: time="2026-01-17T00:16:40.611704786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:40.708223 containerd[1599]: time="2026-01-17T00:16:40.708157410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8677d57b99-wp5xq,Uid:2f056ee9-6914-4575-b585-f333a8c77da9,Namespace:calico-system,Attempt:0,} returns sandbox id \"57a8341cebf0a3b77d2d22ab00315fdb0a38c902e41865d3aef298bce5d5fb4e\"" Jan 17 00:16:40.715195 containerd[1599]: time="2026-01-17T00:16:40.714743076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:16:41.039696 containerd[1599]: time="2026-01-17T00:16:41.039417315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:41.050222 containerd[1599]: time="2026-01-17T00:16:41.040584594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:16:41.050420 containerd[1599]: time="2026-01-17T00:16:41.040628925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:16:41.054642 kubelet[2699]: E0117 00:16:41.054439 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:41.056544 kubelet[2699]: E0117 00:16:41.055403 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:41.078097 kubelet[2699]: E0117 00:16:41.077905 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2324e28db44d456388a17c04446e2b47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:41.081250 containerd[1599]: time="2026-01-17T00:16:41.081199925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:16:41.368108 kubelet[2699]: I0117 00:16:41.367309 2699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a74e7d7-2baa-4c90-956f-f975e2acce52" path="/var/lib/kubelet/pods/9a74e7d7-2baa-4c90-956f-f975e2acce52/volumes" Jan 17 00:16:41.397370 containerd[1599]: time="2026-01-17T00:16:41.397210695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:41.398359 containerd[1599]: time="2026-01-17T00:16:41.398302776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:16:41.398469 containerd[1599]: time="2026-01-17T00:16:41.398426896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:41.398735 kubelet[2699]: E0117 00:16:41.398661 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:41.398735 kubelet[2699]: E0117 00:16:41.398727 2699 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:41.399151 kubelet[2699]: E0117 00:16:41.399033 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:41.401021 kubelet[2699]: E0117 00:16:41.400959 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:16:41.719452 kubelet[2699]: E0117 
00:16:41.719371 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:16:42.012275 systemd-networkd[1225]: vxlan.calico: Gained IPv6LL Jan 17 00:16:42.268458 systemd-networkd[1225]: cali18c8540355b: Gained IPv6LL Jan 17 00:16:42.367160 containerd[1599]: time="2026-01-17T00:16:42.366909464Z" level=info msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.444 [INFO][4188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.444 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" iface="eth0" netns="/var/run/netns/cni-3e595b22-e450-2f83-0b11-d23d0ccfd8af" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.445 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" iface="eth0" netns="/var/run/netns/cni-3e595b22-e450-2f83-0b11-d23d0ccfd8af" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.445 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" iface="eth0" netns="/var/run/netns/cni-3e595b22-e450-2f83-0b11-d23d0ccfd8af" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.445 [INFO][4188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.445 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.479 [INFO][4195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.480 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.480 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.488 [WARNING][4195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.488 [INFO][4195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.490 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:42.495873 containerd[1599]: 2026-01-17 00:16:42.493 [INFO][4188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:42.497582 containerd[1599]: time="2026-01-17T00:16:42.496273352Z" level=info msg="TearDown network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" successfully" Jan 17 00:16:42.497582 containerd[1599]: time="2026-01-17T00:16:42.497433197Z" level=info msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" returns successfully" Jan 17 00:16:42.498945 containerd[1599]: time="2026-01-17T00:16:42.498818555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvltb,Uid:fe4a7e29-720a-4e34-a53e-e9187d031f57,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:42.503723 systemd[1]: run-netns-cni\x2d3e595b22\x2de450\x2d2f83\x2d0b11\x2dd23d0ccfd8af.mount: Deactivated successfully. 
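Note on the pull failures above: the whisker and whisker-backend pulls fail before any layer transfers ("bytes read=73"), because ghcr.io has no manifest for the v3.30.4 tag; containerd's resolver gets a 404 and reports NotFound, which the kubelet surfaces as ErrImagePull. The same probe can be run outside the kubelet. A minimal Go sketch following the OCI distribution spec, assuming the repository grants anonymous pull tokens (repo and tag taken from the log):

// probe.go - check whether a tag exists on ghcr.io, roughly the resolve
// step behind the "trying next host - response was http.StatusNotFound"
// entries above. Assumption: public repo with anonymous token grants.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// 1. Anonymous bearer token scoped for pull.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer tr.Body.Close()
	var t struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&t); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; a 404 here is what surfaces as NotFound above.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+t.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(repo+":"+tag, "->", resp.Status) // expect "404 Not Found"
}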
Jan 17 00:16:42.667155 systemd-networkd[1225]: cali6e58505b48a: Link UP Jan 17 00:16:42.671026 systemd-networkd[1225]: cali6e58505b48a: Gained carrier Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.568 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0 csi-node-driver- calico-system fe4a7e29-720a-4e34-a53e-e9187d031f57 994 0 2026-01-17 00:16:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 csi-node-driver-pvltb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e58505b48a [] [] }} ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.568 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.605 [INFO][4215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" HandleID="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.605 [INFO][4215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" HandleID="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"csi-node-driver-pvltb", "timestamp":"2026-01-17 00:16:42.605217096 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.605 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.605 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.605 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.614 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.621 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.628 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.633 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.636 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.637 [INFO][4215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.639 [INFO][4215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8 Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.645 [INFO][4215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.654 [INFO][4215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.130/26] block=192.168.19.128/26 handle="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.654 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.130/26] handle="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.654 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
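Note on the [4215] IPAM sequence above: this is Calico's block-affinity assignment path in full, acquire the host-wide IPAM lock, look up the host's affinities, load the affine block 192.168.19.128/26, claim the next free address, write the block back, release the lock. The ipam.AutoAssignArgs value dumped earlier drives it; a rough libcalico-go sketch of the same call follows (client construction elided; import paths, field names, and the two return values follow recent libcalico-go, so check the vendored version):

// ipam_assign.go - sketch of the AutoAssign call behind the [4215] entries.
// A configured clientv3.Interface is assumed; error handling is trimmed.
package ipamsketch

import (
	"context"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

// assignPodIP mirrors the logged request: one IPv4, no IPv6, keyed by the
// "k8s-pod-network.<containerID>" handle so the address can be released later.
func assignPodIP(ctx context.Context, c clientv3.Interface, handle string) error {
	args := ipam.AutoAssignArgs{
		Num4:     1, // matches Num4:1, Num6:0 in the dump above
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ci-4081.3.6-n-cccb0c3e85",
			"pod":       "csi-node-driver-pvltb",
		},
		Hostname:    "ci-4081.3.6-n-cccb0c3e85",
		IntendedUse: "Workload",
	}
	// AutoAssign serializes on the host-wide IPAM lock and prefers blocks
	// with an affinity for this host -- hence 192.168.19.128/26 every time.
	v4, _, err := c.IPAM().AutoAssign(ctx, args)
	if err != nil {
		return err
	}
	_ = v4.IPs // held [192.168.19.130/26] in the run logged above
	return nil
}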
Jan 17 00:16:42.692912 containerd[1599]: 2026-01-17 00:16:42.655 [INFO][4215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.130/26] IPv6=[] ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" HandleID="k8s-pod-network.58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.658 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe4a7e29-720a-4e34-a53e-e9187d031f57", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"csi-node-driver-pvltb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e58505b48a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.658 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.130/32] ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.658 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e58505b48a ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.672 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.673 [INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe4a7e29-720a-4e34-a53e-e9187d031f57", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8", Pod:"csi-node-driver-pvltb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e58505b48a", MAC:"7e:17:d6:f9:20:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:42.693619 containerd[1599]: 2026-01-17 00:16:42.688 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8" Namespace="calico-system" Pod="csi-node-driver-pvltb" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:42.727569 kubelet[2699]: E0117 00:16:42.727103 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:16:42.743318 containerd[1599]: time="2026-01-17T00:16:42.742898688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:42.743318 containerd[1599]: time="2026-01-17T00:16:42.743283976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:42.743318 containerd[1599]: time="2026-01-17T00:16:42.743319030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:42.744264 containerd[1599]: time="2026-01-17T00:16:42.743521507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:42.826460 containerd[1599]: time="2026-01-17T00:16:42.826400576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvltb,Uid:fe4a7e29-720a-4e34-a53e-e9187d031f57,Namespace:calico-system,Attempt:1,} returns sandbox id \"58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8\"" Jan 17 00:16:42.828630 containerd[1599]: time="2026-01-17T00:16:42.828578708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:16:43.173948 containerd[1599]: time="2026-01-17T00:16:43.173850781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:43.175012 containerd[1599]: time="2026-01-17T00:16:43.174853549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:16:43.175012 containerd[1599]: time="2026-01-17T00:16:43.174939530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:16:43.175589 kubelet[2699]: E0117 00:16:43.175310 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:43.175589 kubelet[2699]: E0117 00:16:43.175368 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:43.175589 kubelet[2699]: E0117 00:16:43.175511 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:43.179467 containerd[1599]: time="2026-01-17T00:16:43.179347010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:16:43.368176 containerd[1599]: time="2026-01-17T00:16:43.366071103Z" level=info msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" Jan 17 00:16:43.368176 containerd[1599]: time="2026-01-17T00:16:43.367699277Z" level=info msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" Jan 17 00:16:43.527653 containerd[1599]: time="2026-01-17T00:16:43.527585770Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:43.535561 containerd[1599]: time="2026-01-17T00:16:43.534945232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:16:43.537032 containerd[1599]: time="2026-01-17T00:16:43.534973133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:16:43.540431 kubelet[2699]: E0117 00:16:43.536350 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:43.540431 kubelet[2699]: E0117 00:16:43.536641 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:43.544991 kubelet[2699]: E0117 00:16:43.538698 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:43.547426 kubelet[2699]: E0117 00:16:43.547338 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.479 [INFO][4289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.479 [INFO][4289] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" iface="eth0" netns="/var/run/netns/cni-8979f241-1689-1e24-57a8-7de1c9b4d629" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.480 [INFO][4289] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" iface="eth0" netns="/var/run/netns/cni-8979f241-1689-1e24-57a8-7de1c9b4d629" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.481 [INFO][4289] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" iface="eth0" netns="/var/run/netns/cni-8979f241-1689-1e24-57a8-7de1c9b4d629" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.481 [INFO][4289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.481 [INFO][4289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.561 [INFO][4304] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.561 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.561 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.569 [WARNING][4304] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.569 [INFO][4304] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.573 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:43.585970 containerd[1599]: 2026-01-17 00:16:43.580 [INFO][4289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:43.588621 containerd[1599]: time="2026-01-17T00:16:43.588090225Z" level=info msg="TearDown network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" successfully" Jan 17 00:16:43.588621 containerd[1599]: time="2026-01-17T00:16:43.588145126Z" level=info msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" returns successfully" Jan 17 00:16:43.591006 kubelet[2699]: E0117 00:16:43.589191 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:43.593711 containerd[1599]: time="2026-01-17T00:16:43.593145851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59mdc,Uid:b0c84ef6-254a-45d6-83f8-3efb7d2d1036,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:43.597647 systemd[1]: run-netns-cni\x2d8979f241\x2d1689\x2d1e24\x2d57a8\x2d7de1c9b4d629.mount: Deactivated successfully. Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.467 [INFO][4288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.470 [INFO][4288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" iface="eth0" netns="/var/run/netns/cni-bc6fadc8-7db4-8256-3828-be2c7daf4a27" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.470 [INFO][4288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" iface="eth0" netns="/var/run/netns/cni-bc6fadc8-7db4-8256-3828-be2c7daf4a27" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.471 [INFO][4288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" iface="eth0" netns="/var/run/netns/cni-bc6fadc8-7db4-8256-3828-be2c7daf4a27" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.471 [INFO][4288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.471 [INFO][4288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.562 [INFO][4302] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.562 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.573 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.588 [WARNING][4302] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.588 [INFO][4302] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.595 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:43.616612 containerd[1599]: 2026-01-17 00:16:43.606 [INFO][4288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:43.620561 containerd[1599]: time="2026-01-17T00:16:43.619550468Z" level=info msg="TearDown network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" successfully" Jan 17 00:16:43.620561 containerd[1599]: time="2026-01-17T00:16:43.619604946Z" level=info msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" returns successfully" Jan 17 00:16:43.635948 containerd[1599]: time="2026-01-17T00:16:43.635027578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cd6lg,Uid:96b65c17-4b2e-4680-86fb-3425314d6580,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:43.641120 systemd[1]: run-netns-cni\x2dbc6fadc8\x2d7db4\x2d8256\x2d3828\x2dbe2c7daf4a27.mount: Deactivated successfully. 
Jan 17 00:16:43.746695 kubelet[2699]: E0117 00:16:43.744567 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:43.906557 systemd-networkd[1225]: cali1550eb82cd9: Link UP Jan 17 00:16:43.909091 systemd-networkd[1225]: cali1550eb82cd9: Gained carrier Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.773 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0 coredns-668d6bf9bc- kube-system b0c84ef6-254a-45d6-83f8-3efb7d2d1036 1013 0 2026-01-17 00:16:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 coredns-668d6bf9bc-59mdc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1550eb82cd9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.774 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.816 [INFO][4340] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" HandleID="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.817 [INFO][4340] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" HandleID="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"coredns-668d6bf9bc-59mdc", "timestamp":"2026-01-17 00:16:43.81686547 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.817 [INFO][4340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.817 [INFO][4340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.817 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.833 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.844 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.855 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.861 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.869 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.869 [INFO][4340] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.873 [INFO][4340] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10 Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.882 [INFO][4340] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.891 [INFO][4340] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.131/26] block=192.168.19.128/26 handle="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.892 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.131/26] handle="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.892 [INFO][4340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:43.949818 containerd[1599]: 2026-01-17 00:16:43.892 [INFO][4340] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.131/26] IPv6=[] ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" HandleID="k8s-pod-network.0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.954013 containerd[1599]: 2026-01-17 00:16:43.899 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0c84ef6-254a-45d6-83f8-3efb7d2d1036", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"coredns-668d6bf9bc-59mdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1550eb82cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:43.954013 containerd[1599]: 2026-01-17 00:16:43.900 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.131/32] ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.954013 containerd[1599]: 2026-01-17 00:16:43.900 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1550eb82cd9 ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.954013 containerd[1599]: 2026-01-17 00:16:43.910 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:43.954013 containerd[1599]: 2026-01-17 00:16:43.912 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0c84ef6-254a-45d6-83f8-3efb7d2d1036", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10", Pod:"coredns-668d6bf9bc-59mdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1550eb82cd9", MAC:"fe:a3:3e:de:29:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:43.954281 containerd[1599]: 2026-01-17 00:16:43.934 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10" Namespace="kube-system" Pod="coredns-668d6bf9bc-59mdc" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:44.079931 containerd[1599]: time="2026-01-17T00:16:44.073172748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:44.079931 containerd[1599]: time="2026-01-17T00:16:44.073241047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:44.079931 containerd[1599]: time="2026-01-17T00:16:44.073257158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:44.079931 containerd[1599]: time="2026-01-17T00:16:44.073363978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:44.098876 systemd-networkd[1225]: calic9d29ec41b7: Link UP Jan 17 00:16:44.117949 systemd-networkd[1225]: calic9d29ec41b7: Gained carrier Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.848 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0 goldmane-666569f655- calico-system 96b65c17-4b2e-4680-86fb-3425314d6580 1012 0 2026-01-17 00:16:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 goldmane-666569f655-cd6lg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9d29ec41b7 [] [] }} ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.848 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.961 [INFO][4349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" HandleID="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.962 [INFO][4349] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" HandleID="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003580c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"goldmane-666569f655-cd6lg", "timestamp":"2026-01-17 00:16:43.96165167 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.962 [INFO][4349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.962 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.962 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.973 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.982 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:43.998 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.006 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.011 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.012 [INFO][4349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.015 [INFO][4349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4 Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.028 [INFO][4349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.055 [INFO][4349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.132/26] block=192.168.19.128/26 handle="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.056 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.132/26] handle="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.057 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:44.175034 containerd[1599]: 2026-01-17 00:16:44.057 [INFO][4349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.132/26] IPv6=[] ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" HandleID="k8s-pod-network.304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.066 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"96b65c17-4b2e-4680-86fb-3425314d6580", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"goldmane-666569f655-cd6lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9d29ec41b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.066 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.132/32] ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.066 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9d29ec41b7 ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.120 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.123 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" 
Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"96b65c17-4b2e-4680-86fb-3425314d6580", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4", Pod:"goldmane-666569f655-cd6lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9d29ec41b7", MAC:"3e:77:27:26:d8:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:44.184181 containerd[1599]: 2026-01-17 00:16:44.165 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4" Namespace="calico-system" Pod="goldmane-666569f655-cd6lg" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:44.218030 containerd[1599]: time="2026-01-17T00:16:44.215234774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-59mdc,Uid:b0c84ef6-254a-45d6-83f8-3efb7d2d1036,Namespace:kube-system,Attempt:1,} returns sandbox id \"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10\"" Jan 17 00:16:44.222177 kubelet[2699]: E0117 00:16:44.222120 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:44.228713 containerd[1599]: time="2026-01-17T00:16:44.228648888Z" level=info msg="CreateContainer within sandbox \"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:44.250934 containerd[1599]: time="2026-01-17T00:16:44.246278570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:44.250934 containerd[1599]: time="2026-01-17T00:16:44.246373862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:44.250934 containerd[1599]: time="2026-01-17T00:16:44.246392468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:44.250934 containerd[1599]: time="2026-01-17T00:16:44.246539584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:44.281939 containerd[1599]: time="2026-01-17T00:16:44.279578774Z" level=info msg="CreateContainer within sandbox \"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"625ca9122f41ac05d1c6b243013d1544b4c998c7a04bef50af69e9367399fc95\"" Jan 17 00:16:44.291919 containerd[1599]: time="2026-01-17T00:16:44.290158543Z" level=info msg="StartContainer for \"625ca9122f41ac05d1c6b243013d1544b4c998c7a04bef50af69e9367399fc95\"" Jan 17 00:16:44.366949 containerd[1599]: time="2026-01-17T00:16:44.366897054Z" level=info msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" Jan 17 00:16:44.368874 containerd[1599]: time="2026-01-17T00:16:44.367935649Z" level=info msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" Jan 17 00:16:44.573058 systemd-networkd[1225]: cali6e58505b48a: Gained IPv6LL Jan 17 00:16:44.628187 containerd[1599]: time="2026-01-17T00:16:44.627224184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cd6lg,Uid:96b65c17-4b2e-4680-86fb-3425314d6580,Namespace:calico-system,Attempt:1,} returns sandbox id \"304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4\"" Jan 17 00:16:44.639329 containerd[1599]: time="2026-01-17T00:16:44.639288719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:16:44.662410 containerd[1599]: time="2026-01-17T00:16:44.662291522Z" level=info msg="StartContainer for \"625ca9122f41ac05d1c6b243013d1544b4c998c7a04bef50af69e9367399fc95\" returns successfully" Jan 17 00:16:44.776643 kubelet[2699]: E0117 00:16:44.776596 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:44.816237 kubelet[2699]: E0117 00:16:44.815438 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.663 [INFO][4496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.666 [INFO][4496] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" iface="eth0" netns="/var/run/netns/cni-3398d737-7587-5779-75fa-ec8027a92679" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.667 [INFO][4496] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" iface="eth0" netns="/var/run/netns/cni-3398d737-7587-5779-75fa-ec8027a92679" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.667 [INFO][4496] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" iface="eth0" netns="/var/run/netns/cni-3398d737-7587-5779-75fa-ec8027a92679" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.667 [INFO][4496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.667 [INFO][4496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.783 [INFO][4519] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.798 [INFO][4519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.798 [INFO][4519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.828 [WARNING][4519] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.828 [INFO][4519] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.834 [INFO][4519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:44.867339 containerd[1599]: 2026-01-17 00:16:44.855 [INFO][4496] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:44.867339 containerd[1599]: time="2026-01-17T00:16:44.864325413Z" level=info msg="TearDown network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" successfully" Jan 17 00:16:44.867339 containerd[1599]: time="2026-01-17T00:16:44.864492004Z" level=info msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" returns successfully" Jan 17 00:16:44.879620 systemd[1]: run-netns-cni\x2d3398d737\x2d7587\x2d5779\x2d75fa\x2dec8027a92679.mount: Deactivated successfully. Jan 17 00:16:44.883330 kubelet[2699]: E0117 00:16:44.880443 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:44.884011 containerd[1599]: time="2026-01-17T00:16:44.883808270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nql5,Uid:ec675aa1-75e7-4358-af19-bc10fabdfd85,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:44.927649 kubelet[2699]: I0117 00:16:44.924874 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-59mdc" podStartSLOduration=42.924845389 podStartE2EDuration="42.924845389s" podCreationTimestamp="2026-01-17 00:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:44.922312315 +0000 UTC m=+49.702743045" watchObservedRunningTime="2026-01-17 00:16:44.924845389 +0000 UTC m=+49.705276121" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.690 [INFO][4488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.691 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" iface="eth0" netns="/var/run/netns/cni-10419b36-37f7-0a62-a0c0-d884ef2068a7" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.691 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" iface="eth0" netns="/var/run/netns/cni-10419b36-37f7-0a62-a0c0-d884ef2068a7" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.692 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" iface="eth0" netns="/var/run/netns/cni-10419b36-37f7-0a62-a0c0-d884ef2068a7" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.692 [INFO][4488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.692 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.896 [INFO][4527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.898 [INFO][4527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.898 [INFO][4527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.924 [WARNING][4527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.926 [INFO][4527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.934 [INFO][4527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:44.986595 containerd[1599]: 2026-01-17 00:16:44.938 [INFO][4488] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:44.988184 containerd[1599]: time="2026-01-17T00:16:44.987935163Z" level=info msg="TearDown network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" successfully" Jan 17 00:16:44.988184 containerd[1599]: time="2026-01-17T00:16:44.987978782Z" level=info msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" returns successfully" Jan 17 00:16:44.988184 containerd[1599]: time="2026-01-17T00:16:44.988170642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:44.989703 containerd[1599]: time="2026-01-17T00:16:44.989642812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:16:44.996246 containerd[1599]: time="2026-01-17T00:16:44.989801286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:44.996246 containerd[1599]: time="2026-01-17T00:16:44.993388077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4ffb8bcd-m826d,Uid:7b8b1bac-c0de-45cb-b647-eb4712722238,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:44.995342 systemd[1]: run-netns-cni\x2d10419b36\x2d37f7\x2d0a62\x2da0c0\x2dd884ef2068a7.mount: Deactivated successfully. Jan 17 00:16:44.996436 kubelet[2699]: E0117 00:16:44.990978 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:44.996436 kubelet[2699]: E0117 00:16:44.991023 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:44.996436 kubelet[2699]: E0117 00:16:44.993522 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9cvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cd6lg_calico-system(96b65c17-4b2e-4680-86fb-3425314d6580): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:44.998890 kubelet[2699]: E0117 00:16:44.996769 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:16:45.217382 systemd-networkd[1225]: 
cali1d57617aa9d: Link UP Jan 17 00:16:45.220637 systemd-networkd[1225]: cali1d57617aa9d: Gained carrier Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.079 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0 coredns-668d6bf9bc- kube-system ec675aa1-75e7-4358-af19-bc10fabdfd85 1039 0 2026-01-17 00:16:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 coredns-668d6bf9bc-5nql5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d57617aa9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.084 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.135 [INFO][4565] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" HandleID="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.135 [INFO][4565] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" HandleID="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"coredns-668d6bf9bc-5nql5", "timestamp":"2026-01-17 00:16:45.135273078 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.135 [INFO][4565] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.135 [INFO][4565] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.135 [INFO][4565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.147 [INFO][4565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.161 [INFO][4565] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.172 [INFO][4565] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.176 [INFO][4565] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.182 [INFO][4565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.182 [INFO][4565] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.186 [INFO][4565] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8 Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.192 [INFO][4565] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.201 [INFO][4565] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.133/26] block=192.168.19.128/26 handle="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.201 [INFO][4565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.133/26] handle="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.201 [INFO][4565] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:45.245795 containerd[1599]: 2026-01-17 00:16:45.202 [INFO][4565] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.133/26] IPv6=[] ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" HandleID="k8s-pod-network.ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.246534 containerd[1599]: 2026-01-17 00:16:45.208 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ec675aa1-75e7-4358-af19-bc10fabdfd85", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"coredns-668d6bf9bc-5nql5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d57617aa9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:45.246534 containerd[1599]: 2026-01-17 00:16:45.209 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.133/32] ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.246534 containerd[1599]: 2026-01-17 00:16:45.209 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d57617aa9d ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.246534 containerd[1599]: 2026-01-17 00:16:45.224 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.246534 containerd[1599]: 2026-01-17 00:16:45.224 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ec675aa1-75e7-4358-af19-bc10fabdfd85", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8", Pod:"coredns-668d6bf9bc-5nql5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d57617aa9d", MAC:"9e:dd:92:c8:c9:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:45.246823 containerd[1599]: 2026-01-17 00:16:45.241 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nql5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:45.290047 containerd[1599]: time="2026-01-17T00:16:45.289517181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:45.290047 containerd[1599]: time="2026-01-17T00:16:45.289597690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:45.290047 containerd[1599]: time="2026-01-17T00:16:45.289669169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.292638 containerd[1599]: time="2026-01-17T00:16:45.292334534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.348987 systemd-networkd[1225]: cali23597c14d1f: Link UP Jan 17 00:16:45.351084 systemd-networkd[1225]: cali23597c14d1f: Gained carrier Jan 17 00:16:45.381865 containerd[1599]: time="2026-01-17T00:16:45.378767764Z" level=info msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" Jan 17 00:16:45.404490 systemd-networkd[1225]: cali1550eb82cd9: Gained IPv6LL Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.105 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0 calico-kube-controllers-7d4ffb8bcd- calico-system 7b8b1bac-c0de-45cb-b647-eb4712722238 1041 0 2026-01-17 00:16:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4ffb8bcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 calico-kube-controllers-7d4ffb8bcd-m826d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali23597c14d1f [] [] }} ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.105 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.167 [INFO][4570] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" HandleID="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.168 [INFO][4570] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" HandleID="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"calico-kube-controllers-7d4ffb8bcd-m826d", "timestamp":"2026-01-17 00:16:45.167234522 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.168 [INFO][4570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.202 [INFO][4570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.202 [INFO][4570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.250 [INFO][4570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.265 [INFO][4570] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.282 [INFO][4570] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.290 [INFO][4570] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.296 [INFO][4570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.296 [INFO][4570] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.299 [INFO][4570] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99 Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.310 [INFO][4570] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.323 [INFO][4570] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.134/26] block=192.168.19.128/26 handle="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.323 [INFO][4570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.134/26] handle="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.327 [INFO][4570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:45.438347 containerd[1599]: 2026-01-17 00:16:45.327 [INFO][4570] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.134/26] IPv6=[] ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" HandleID="k8s-pod-network.0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.337 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0", GenerateName:"calico-kube-controllers-7d4ffb8bcd-", Namespace:"calico-system", SelfLink:"", UID:"7b8b1bac-c0de-45cb-b647-eb4712722238", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4ffb8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"calico-kube-controllers-7d4ffb8bcd-m826d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23597c14d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.337 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.134/32] ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.337 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23597c14d1f ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.362 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" 
WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.363 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0", GenerateName:"calico-kube-controllers-7d4ffb8bcd-", Namespace:"calico-system", SelfLink:"", UID:"7b8b1bac-c0de-45cb-b647-eb4712722238", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4ffb8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99", Pod:"calico-kube-controllers-7d4ffb8bcd-m826d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23597c14d1f", MAC:"c6:79:cb:b6:c2:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:45.442317 containerd[1599]: 2026-01-17 00:16:45.393 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99" Namespace="calico-system" Pod="calico-kube-controllers-7d4ffb8bcd-m826d" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:45.450474 containerd[1599]: time="2026-01-17T00:16:45.450362988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nql5,Uid:ec675aa1-75e7-4358-af19-bc10fabdfd85,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8\"" Jan 17 00:16:45.452598 kubelet[2699]: E0117 00:16:45.451694 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:45.456583 containerd[1599]: time="2026-01-17T00:16:45.456430656Z" level=info msg="CreateContainer within sandbox \"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:45.494143 containerd[1599]: time="2026-01-17T00:16:45.493992749Z" level=info msg="CreateContainer within sandbox 
\"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bcd04e3b62c5bb5bbd390a42304d2ae7aeee2a735a725c4f67557c355c49e10\"" Jan 17 00:16:45.498108 containerd[1599]: time="2026-01-17T00:16:45.498055501Z" level=info msg="StartContainer for \"7bcd04e3b62c5bb5bbd390a42304d2ae7aeee2a735a725c4f67557c355c49e10\"" Jan 17 00:16:45.513114 containerd[1599]: time="2026-01-17T00:16:45.512555920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:45.513114 containerd[1599]: time="2026-01-17T00:16:45.512618138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:45.513114 containerd[1599]: time="2026-01-17T00:16:45.512635379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.513114 containerd[1599]: time="2026-01-17T00:16:45.512748495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.598460 systemd-networkd[1225]: calic9d29ec41b7: Gained IPv6LL Jan 17 00:16:45.739465 containerd[1599]: time="2026-01-17T00:16:45.738283547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4ffb8bcd-m826d,Uid:7b8b1bac-c0de-45cb-b647-eb4712722238,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99\"" Jan 17 00:16:45.768117 containerd[1599]: time="2026-01-17T00:16:45.766418569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:16:45.787760 containerd[1599]: time="2026-01-17T00:16:45.787569811Z" level=info msg="StartContainer for \"7bcd04e3b62c5bb5bbd390a42304d2ae7aeee2a735a725c4f67557c355c49e10\" returns successfully" Jan 17 00:16:45.824602 kubelet[2699]: E0117 00:16:45.823358 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:45.826900 kubelet[2699]: E0117 00:16:45.826875 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:45.834657 kubelet[2699]: E0117 00:16:45.834597 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.564 [INFO][4643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.565 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" iface="eth0" netns="/var/run/netns/cni-9b5488ba-4ff5-bb39-1f31-0fe20860bb79" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.565 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" iface="eth0" netns="/var/run/netns/cni-9b5488ba-4ff5-bb39-1f31-0fe20860bb79" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.565 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" iface="eth0" netns="/var/run/netns/cni-9b5488ba-4ff5-bb39-1f31-0fe20860bb79" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.565 [INFO][4643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.565 [INFO][4643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.764 [INFO][4705] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.764 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.764 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.819 [WARNING][4705] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.820 [INFO][4705] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.841 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:45.853177 containerd[1599]: 2026-01-17 00:16:45.846 [INFO][4643] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:45.857715 containerd[1599]: time="2026-01-17T00:16:45.853341158Z" level=info msg="TearDown network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" successfully" Jan 17 00:16:45.857715 containerd[1599]: time="2026-01-17T00:16:45.853370904Z" level=info msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" returns successfully" Jan 17 00:16:45.857715 containerd[1599]: time="2026-01-17T00:16:45.854371678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-8tc5j,Uid:43a11e4d-d5b2-4905-990b-145b7f453524,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:45.873604 systemd[1]: run-netns-cni\x2d9b5488ba\x2d4ff5\x2dbb39\x2d1f31\x2d0fe20860bb79.mount: Deactivated successfully. Jan 17 00:16:45.946451 kubelet[2699]: I0117 00:16:45.945939 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5nql5" podStartSLOduration=44.945917905 podStartE2EDuration="44.945917905s" podCreationTimestamp="2026-01-17 00:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:45.925400661 +0000 UTC m=+50.705831384" watchObservedRunningTime="2026-01-17 00:16:45.945917905 +0000 UTC m=+50.726348618" Jan 17 00:16:46.146116 systemd-networkd[1225]: cali87fc98b91e1: Link UP Jan 17 00:16:46.149388 systemd-networkd[1225]: cali87fc98b91e1: Gained carrier Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.038 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0 calico-apiserver-6c6cc8d58d- calico-apiserver 43a11e4d-d5b2-4905-990b-145b7f453524 1067 0 2026-01-17 00:16:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6cc8d58d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 calico-apiserver-6c6cc8d58d-8tc5j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87fc98b91e1 [] [] }} ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.039 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.089 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" HandleID="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.089 [INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" HandleID="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cafe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"calico-apiserver-6c6cc8d58d-8tc5j", "timestamp":"2026-01-17 00:16:46.08946263 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.090 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.090 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.090 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.098 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.104 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.111 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.114 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.118 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.118 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.120 [INFO][4765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32 Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.124 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.137 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.135/26] block=192.168.19.128/26 handle="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.138 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.135/26] handle="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 
00:16:46.138 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:46.172976 containerd[1599]: 2026-01-17 00:16:46.138 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.135/26] IPv6=[] ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" HandleID="k8s-pod-network.79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.141 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43a11e4d-d5b2-4905-990b-145b7f453524", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"calico-apiserver-6c6cc8d58d-8tc5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fc98b91e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.141 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.135/32] ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.141 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87fc98b91e1 ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.150 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j"
WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.151 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43a11e4d-d5b2-4905-990b-145b7f453524", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32", Pod:"calico-apiserver-6c6cc8d58d-8tc5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fc98b91e1", MAC:"be:ff:9d:75:e4:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.173635 containerd[1599]: 2026-01-17 00:16:46.167 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-8tc5j" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:46.203156 containerd[1599]: time="2026-01-17T00:16:46.201658375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:46.203156 containerd[1599]: time="2026-01-17T00:16:46.201737734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:46.203156 containerd[1599]: time="2026-01-17T00:16:46.201754238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.203156 containerd[1599]: time="2026-01-17T00:16:46.201872516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.236961 containerd[1599]: time="2026-01-17T00:16:46.236422008Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:46.238123 containerd[1599]: time="2026-01-17T00:16:46.238053561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:16:46.238344 containerd[1599]: time="2026-01-17T00:16:46.238222208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:46.238715 kubelet[2699]: E0117 00:16:46.238489 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:46.239105 kubelet[2699]: E0117 00:16:46.239069 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:46.240463 kubelet[2699]: E0117 00:16:46.239478 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xccxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4ffb8bcd-m826d_calico-system(7b8b1bac-c0de-45cb-b647-eb4712722238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:46.241590 kubelet[2699]: E0117 00:16:46.241542 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:16:46.289994 containerd[1599]: time="2026-01-17T00:16:46.289764111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-8tc5j,Uid:43a11e4d-d5b2-4905-990b-145b7f453524,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32\"" Jan 17 00:16:46.293249 containerd[1599]: time="2026-01-17T00:16:46.292648206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:46.366271 containerd[1599]: time="2026-01-17T00:16:46.365782568Z" level=info msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" Jan 17 00:16:46.430153 systemd-networkd[1225]: cali23597c14d1f: Gained IPv6LL Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.436 [INFO][4832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.436 [INFO][4832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" iface="eth0" netns="/var/run/netns/cni-aecbc4d5-95f3-fe6b-7036-36b8b8c55b82" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.437 [INFO][4832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" iface="eth0" netns="/var/run/netns/cni-aecbc4d5-95f3-fe6b-7036-36b8b8c55b82" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.439 [INFO][4832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" iface="eth0" netns="/var/run/netns/cni-aecbc4d5-95f3-fe6b-7036-36b8b8c55b82" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.439 [INFO][4832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.439 [INFO][4832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.470 [INFO][4839] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.470 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.470 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.482 [WARNING][4839] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.482 [INFO][4839] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.484 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:46.489124 containerd[1599]: 2026-01-17 00:16:46.486 [INFO][4832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:46.489704 containerd[1599]: time="2026-01-17T00:16:46.489456544Z" level=info msg="TearDown network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" successfully" Jan 17 00:16:46.489704 containerd[1599]: time="2026-01-17T00:16:46.489500644Z" level=info msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" returns successfully" Jan 17 00:16:46.491383 containerd[1599]: time="2026-01-17T00:16:46.491319786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-g2rj5,Uid:cd6dbe24-c430-428d-92d9-91f581859d83,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:46.598032 systemd[1]: run-netns-cni\x2daecbc4d5\x2d95f3\x2dfe6b\x2d7036\x2d36b8b8c55b82.mount: Deactivated successfully. 
Jan 17 00:16:46.618350 containerd[1599]: time="2026-01-17T00:16:46.618296396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:46.620538 containerd[1599]: time="2026-01-17T00:16:46.620468270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:46.620645 containerd[1599]: time="2026-01-17T00:16:46.620595133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:46.621000 kubelet[2699]: E0117 00:16:46.620795 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:46.621083 kubelet[2699]: E0117 00:16:46.620948 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:46.621483 kubelet[2699]: E0117 00:16:46.621424 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-8tc5j_calico-apiserver(43a11e4d-d5b2-4905-990b-145b7f453524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:46.623618 kubelet[2699]: E0117 00:16:46.623545 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:16:46.665250 systemd-networkd[1225]: cali0ae6a97716e: Link UP Jan 17 00:16:46.666483 systemd-networkd[1225]: cali0ae6a97716e: Gained carrier Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.554 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0 calico-apiserver-6c6cc8d58d- calico-apiserver cd6dbe24-c430-428d-92d9-91f581859d83 1093 0 2026-01-17 00:16:12 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6cc8d58d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-cccb0c3e85 calico-apiserver-6c6cc8d58d-g2rj5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ae6a97716e [] [] }} ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.554 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.591 [INFO][4857] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0
ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" HandleID="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.593 [INFO][4857] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" HandleID="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-cccb0c3e85", "pod":"calico-apiserver-6c6cc8d58d-g2rj5", "timestamp":"2026-01-17 00:16:46.591616178 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cccb0c3e85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.594 [INFO][4857] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.594 [INFO][4857] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.594 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cccb0c3e85' Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.604 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.611 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.628 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.630 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.633 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.634 [INFO][4857] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.639 [INFO][4857] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.644 [INFO][4857] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.654 [INFO][4857] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.136/26] block=192.168.19.128/26 
handle="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.655 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.136/26] handle="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" host="ci-4081.3.6-n-cccb0c3e85" Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.655 [INFO][4857] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:46.722758 containerd[1599]: 2026-01-17 00:16:46.655 [INFO][4857] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.136/26] IPv6=[] ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" HandleID="k8s-pod-network.f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.659 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6dbe24-c430-428d-92d9-91f581859d83", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"", Pod:"calico-apiserver-6c6cc8d58d-g2rj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae6a97716e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.659 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.136/32] ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.659 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ae6a97716e ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" 
WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.672 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.681 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6dbe24-c430-428d-92d9-91f581859d83", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e", Pod:"calico-apiserver-6c6cc8d58d-g2rj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae6a97716e", MAC:"2a:b2:ac:2f:00:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.725076 containerd[1599]: 2026-01-17 00:16:46.707 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e" Namespace="calico-apiserver" Pod="calico-apiserver-6c6cc8d58d-g2rj5" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:46.787241 containerd[1599]: time="2026-01-17T00:16:46.786185678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:46.787241 containerd[1599]: time="2026-01-17T00:16:46.786270980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:46.787241 containerd[1599]: time="2026-01-17T00:16:46.786319669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.787241 containerd[1599]: time="2026-01-17T00:16:46.786488855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.862895 kubelet[2699]: E0117 00:16:46.862503 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:46.871868 kubelet[2699]: E0117 00:16:46.871623 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:16:46.876561 kubelet[2699]: E0117 00:16:46.874999 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:46.876561 kubelet[2699]: E0117 00:16:46.875264 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:16:46.876940 systemd-networkd[1225]: cali1d57617aa9d: Gained IPv6LL Jan 17 00:16:46.986176 containerd[1599]: time="2026-01-17T00:16:46.986021854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6cc8d58d-g2rj5,Uid:cd6dbe24-c430-428d-92d9-91f581859d83,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e\"" Jan 17 00:16:46.991014 containerd[1599]: time="2026-01-17T00:16:46.990687341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:47.354444 containerd[1599]: time="2026-01-17T00:16:47.354181269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:47.355315 containerd[1599]: time="2026-01-17T00:16:47.355239298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:47.355444 containerd[1599]: time="2026-01-17T00:16:47.355360166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:47.355728 kubelet[2699]: E0117 00:16:47.355666 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:47.355793 kubelet[2699]: E0117 00:16:47.355748 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:47.356298 kubelet[2699]: E0117 00:16:47.356000 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-g2rj5_calico-apiserver(cd6dbe24-c430-428d-92d9-91f581859d83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:47.357752 kubelet[2699]: E0117 00:16:47.357659 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:16:47.580599 systemd-networkd[1225]: cali87fc98b91e1: Gained IPv6LL Jan 17 00:16:47.772978 systemd-networkd[1225]: cali0ae6a97716e: Gained IPv6LL Jan 17 00:16:47.869387 kubelet[2699]: E0117 00:16:47.869316 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:16:47.870554 kubelet[2699]: E0117 00:16:47.870525 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:47.871248 kubelet[2699]: E0117 00:16:47.871201 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:16:48.874572 kubelet[2699]: E0117 00:16:48.874522 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:16:55.374951 containerd[1599]: time="2026-01-17T00:16:55.374891275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:16:55.393033 containerd[1599]: time="2026-01-17T00:16:55.392863653Z" level=info msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.470 [WARNING][4937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0c84ef6-254a-45d6-83f8-3efb7d2d1036", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10", Pod:"coredns-668d6bf9bc-59mdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1550eb82cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.471 [INFO][4937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.471 [INFO][4937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" iface="eth0" netns="" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.471 [INFO][4937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.472 [INFO][4937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.516 [INFO][4944] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.516 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.516 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.527 [WARNING][4944] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.527 [INFO][4944] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.530 [INFO][4944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.534901 containerd[1599]: 2026-01-17 00:16:55.532 [INFO][4937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.535503 containerd[1599]: time="2026-01-17T00:16:55.534949718Z" level=info msg="TearDown network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" successfully" Jan 17 00:16:55.535503 containerd[1599]: time="2026-01-17T00:16:55.534979786Z" level=info msg="StopPodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" returns successfully" Jan 17 00:16:55.535719 containerd[1599]: time="2026-01-17T00:16:55.535676663Z" level=info msg="RemovePodSandbox for \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" Jan 17 00:16:55.538442 containerd[1599]: time="2026-01-17T00:16:55.538382358Z" level=info msg="Forcibly stopping sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\"" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.588 [WARNING][4958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0c84ef6-254a-45d6-83f8-3efb7d2d1036", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0fa098d25b59346f7c4cae914860cb58cd2d12b70dc229d4f45aa1cf2fd1be10", Pod:"coredns-668d6bf9bc-59mdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1550eb82cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.588 [INFO][4958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.588 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" iface="eth0" netns="" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.589 [INFO][4958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.589 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.622 [INFO][4965] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.622 [INFO][4965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.622 [INFO][4965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.631 [WARNING][4965] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.631 [INFO][4965] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" HandleID="k8s-pod-network.874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--59mdc-eth0" Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.633 [INFO][4965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.639985 containerd[1599]: 2026-01-17 00:16:55.636 [INFO][4958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359" Jan 17 00:16:55.639985 containerd[1599]: time="2026-01-17T00:16:55.639249493Z" level=info msg="TearDown network for sandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" successfully" Jan 17 00:16:55.656819 containerd[1599]: time="2026-01-17T00:16:55.656456768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.656819 containerd[1599]: time="2026-01-17T00:16:55.656626865Z" level=info msg="RemovePodSandbox \"874ee1adcfaa2ab5a55ee071d79d9d79891ad425846e25d26614fb55edc05359\" returns successfully" Jan 17 00:16:55.658918 containerd[1599]: time="2026-01-17T00:16:55.658369293Z" level=info msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" Jan 17 00:16:55.707769 containerd[1599]: time="2026-01-17T00:16:55.707712128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:55.708895 containerd[1599]: time="2026-01-17T00:16:55.708820661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:16:55.709264 kubelet[2699]: E0117 00:16:55.709187 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:55.709264 kubelet[2699]: E0117 00:16:55.709242 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:55.712549 kubelet[2699]: E0117 00:16:55.709460 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:55.713123 containerd[1599]: time="2026-01-17T00:16:55.709902049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:16:55.714749 containerd[1599]: time="2026-01-17T00:16:55.714607399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.719 [WARNING][4979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ec675aa1-75e7-4358-af19-bc10fabdfd85", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8", Pod:"coredns-668d6bf9bc-5nql5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d57617aa9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.722 [INFO][4979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.723 [INFO][4979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" iface="eth0" netns="" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.723 [INFO][4979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.723 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.757 [INFO][4987] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.757 [INFO][4987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.757 [INFO][4987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.768 [WARNING][4987] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.768 [INFO][4987] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.771 [INFO][4987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.777353 containerd[1599]: 2026-01-17 00:16:55.774 [INFO][4979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.778790 containerd[1599]: time="2026-01-17T00:16:55.777714467Z" level=info msg="TearDown network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" successfully" Jan 17 00:16:55.778790 containerd[1599]: time="2026-01-17T00:16:55.777931670Z" level=info msg="StopPodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" returns successfully" Jan 17 00:16:55.779868 containerd[1599]: time="2026-01-17T00:16:55.779793712Z" level=info msg="RemovePodSandbox for \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" Jan 17 00:16:55.780128 containerd[1599]: time="2026-01-17T00:16:55.779952702Z" level=info msg="Forcibly stopping sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\"" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.840 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ec675aa1-75e7-4358-af19-bc10fabdfd85", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"ea916cc107304eba230b143cc8329e298030bfc7fc38a6f319b23e54377388c8", Pod:"coredns-668d6bf9bc-5nql5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d57617aa9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.841 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.841 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" iface="eth0" netns="" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.841 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.841 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.883 [INFO][5009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.883 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.883 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.893 [WARNING][5009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.893 [INFO][5009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" HandleID="k8s-pod-network.5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-coredns--668d6bf9bc--5nql5-eth0" Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.897 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.904591 containerd[1599]: 2026-01-17 00:16:55.902 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492" Jan 17 00:16:55.904591 containerd[1599]: time="2026-01-17T00:16:55.904509815Z" level=info msg="TearDown network for sandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" successfully" Jan 17 00:16:55.909889 containerd[1599]: time="2026-01-17T00:16:55.909597296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.909889 containerd[1599]: time="2026-01-17T00:16:55.909720367Z" level=info msg="RemovePodSandbox \"5ee26740073050415d2647153f06a68f377fc692e01e6fc92d862cb91aca1492\" returns successfully" Jan 17 00:16:55.910976 containerd[1599]: time="2026-01-17T00:16:55.910423180Z" level=info msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:55.972 [WARNING][5023] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0", GenerateName:"calico-kube-controllers-7d4ffb8bcd-", Namespace:"calico-system", SelfLink:"", UID:"7b8b1bac-c0de-45cb-b647-eb4712722238", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4ffb8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99", Pod:"calico-kube-controllers-7d4ffb8bcd-m826d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23597c14d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:55.973 [INFO][5023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:55.973 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" iface="eth0" netns="" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:55.973 [INFO][5023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:55.973 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.013 [INFO][5030] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.013 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.013 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.027 [WARNING][5030] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.028 [INFO][5030] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.033 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.040243 containerd[1599]: 2026-01-17 00:16:56.036 [INFO][5023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.041136 containerd[1599]: time="2026-01-17T00:16:56.041053416Z" level=info msg="TearDown network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" successfully" Jan 17 00:16:56.041175 containerd[1599]: time="2026-01-17T00:16:56.041138001Z" level=info msg="StopPodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" returns successfully" Jan 17 00:16:56.042561 containerd[1599]: time="2026-01-17T00:16:56.042093617Z" level=info msg="RemovePodSandbox for \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" Jan 17 00:16:56.042561 containerd[1599]: time="2026-01-17T00:16:56.042148667Z" level=info msg="Forcibly stopping sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\"" Jan 17 00:16:56.082842 containerd[1599]: time="2026-01-17T00:16:56.082761134Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:56.085131 containerd[1599]: time="2026-01-17T00:16:56.085059958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:16:56.085772 containerd[1599]: time="2026-01-17T00:16:56.085491144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:16:56.086423 kubelet[2699]: E0117 00:16:56.085974 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:56.086423 kubelet[2699]: E0117 00:16:56.086034 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:56.086423 kubelet[2699]: E0117 00:16:56.086282 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2324e28db44d456388a17c04446e2b47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:56.087226 containerd[1599]: time="2026-01-17T00:16:56.086635712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.135 [WARNING][5044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0", GenerateName:"calico-kube-controllers-7d4ffb8bcd-", Namespace:"calico-system", SelfLink:"", UID:"7b8b1bac-c0de-45cb-b647-eb4712722238", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4ffb8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"0b82ad596733d00537a3989b2477e761578b4f360d1270bac38d8ef685524a99", Pod:"calico-kube-controllers-7d4ffb8bcd-m826d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23597c14d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.135 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.135 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" iface="eth0" netns="" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.135 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.135 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.169 [INFO][5051] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.170 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.170 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.180 [WARNING][5051] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.180 [INFO][5051] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" HandleID="k8s-pod-network.0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--kube--controllers--7d4ffb8bcd--m826d-eth0" Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.183 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.191997 containerd[1599]: 2026-01-17 00:16:56.186 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2" Jan 17 00:16:56.191997 containerd[1599]: time="2026-01-17T00:16:56.190003874Z" level=info msg="TearDown network for sandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" successfully" Jan 17 00:16:56.195764 containerd[1599]: time="2026-01-17T00:16:56.195700151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:56.196078 containerd[1599]: time="2026-01-17T00:16:56.196048609Z" level=info msg="RemovePodSandbox \"0a4ceb6fdac1bb1bb644f2db27ba3010779f539f16d691cf7373dc4e9e647fe2\" returns successfully" Jan 17 00:16:56.196917 containerd[1599]: time="2026-01-17T00:16:56.196881112Z" level=info msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.260 [WARNING][5065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43a11e4d-d5b2-4905-990b-145b7f453524", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32", Pod:"calico-apiserver-6c6cc8d58d-8tc5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fc98b91e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.261 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.261 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" iface="eth0" netns="" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.261 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.261 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.294 [INFO][5072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.294 [INFO][5072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.294 [INFO][5072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.303 [WARNING][5072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.303 [INFO][5072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.306 [INFO][5072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.312145 containerd[1599]: 2026-01-17 00:16:56.309 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.313007 containerd[1599]: time="2026-01-17T00:16:56.312271570Z" level=info msg="TearDown network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" successfully" Jan 17 00:16:56.313007 containerd[1599]: time="2026-01-17T00:16:56.312332744Z" level=info msg="StopPodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" returns successfully" Jan 17 00:16:56.313348 containerd[1599]: time="2026-01-17T00:16:56.313280088Z" level=info msg="RemovePodSandbox for \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" Jan 17 00:16:56.313400 containerd[1599]: time="2026-01-17T00:16:56.313357232Z" level=info msg="Forcibly stopping sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\"" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.390 [WARNING][5086] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43a11e4d-d5b2-4905-990b-145b7f453524", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"79b9fe0cff620d79ae3240c41abc3d3fd1ee3d2b3d4fcd66458826c92e8b2f32", Pod:"calico-apiserver-6c6cc8d58d-8tc5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fc98b91e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.390 [INFO][5086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.390 [INFO][5086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" iface="eth0" netns="" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.390 [INFO][5086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.390 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.426 [INFO][5093] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.427 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.427 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.437 [WARNING][5093] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.437 [INFO][5093] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" HandleID="k8s-pod-network.eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--8tc5j-eth0" Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.439 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.445077 containerd[1599]: 2026-01-17 00:16:56.442 [INFO][5086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4" Jan 17 00:16:56.445881 containerd[1599]: time="2026-01-17T00:16:56.445075073Z" level=info msg="TearDown network for sandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" successfully" Jan 17 00:16:56.449375 containerd[1599]: time="2026-01-17T00:16:56.449317116Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:56.451524 containerd[1599]: time="2026-01-17T00:16:56.451428691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:16:56.452857 containerd[1599]: time="2026-01-17T00:16:56.451768008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:16:56.452857 containerd[1599]: time="2026-01-17T00:16:56.452430535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:16:56.453069 kubelet[2699]: E0117 00:16:56.451957 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:56.453069 kubelet[2699]: E0117 00:16:56.452022 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:56.453069 kubelet[2699]: E0117 00:16:56.452353 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:56.454115 containerd[1599]: time="2026-01-17T00:16:56.454070620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:16:56.454428 containerd[1599]: time="2026-01-17T00:16:56.454404744Z" level=info msg="RemovePodSandbox \"eb7785d29d86ef0e1f15b03acdc234749748c13cd2a805f5e6f913fa28a740a4\" returns successfully" Jan 17 00:16:56.454527 kubelet[2699]: E0117 00:16:56.454268 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:16:56.457930 containerd[1599]: time="2026-01-17T00:16:56.457600498Z" level=info msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.532 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6dbe24-c430-428d-92d9-91f581859d83", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e", Pod:"calico-apiserver-6c6cc8d58d-g2rj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae6a97716e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.532 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.532 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" iface="eth0" netns="" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.532 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.532 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.573 [INFO][5115] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.573 [INFO][5115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.573 [INFO][5115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.582 [WARNING][5115] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.582 [INFO][5115] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.585 [INFO][5115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.591379 containerd[1599]: 2026-01-17 00:16:56.588 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.592299 containerd[1599]: time="2026-01-17T00:16:56.591439129Z" level=info msg="TearDown network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" successfully" Jan 17 00:16:56.592299 containerd[1599]: time="2026-01-17T00:16:56.591478464Z" level=info msg="StopPodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" returns successfully" Jan 17 00:16:56.592299 containerd[1599]: time="2026-01-17T00:16:56.592172195Z" level=info msg="RemovePodSandbox for \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" Jan 17 00:16:56.592299 containerd[1599]: time="2026-01-17T00:16:56.592211207Z" level=info msg="Forcibly stopping sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\"" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.643 [WARNING][5129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0", GenerateName:"calico-apiserver-6c6cc8d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd6dbe24-c430-428d-92d9-91f581859d83", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6cc8d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"f47679ac564b7889978b4ddbadf43a18077500929ac5aa1b7aeff6da6d11c78e", Pod:"calico-apiserver-6c6cc8d58d-g2rj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae6a97716e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.647 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.647 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" iface="eth0" netns="" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.647 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.648 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.698 [INFO][5137] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.698 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.698 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.711 [WARNING][5137] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.711 [INFO][5137] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" HandleID="k8s-pod-network.da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-calico--apiserver--6c6cc8d58d--g2rj5-eth0" Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.715 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.721201 containerd[1599]: 2026-01-17 00:16:56.718 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514" Jan 17 00:16:56.721957 containerd[1599]: time="2026-01-17T00:16:56.721359043Z" level=info msg="TearDown network for sandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" successfully" Jan 17 00:16:56.725514 containerd[1599]: time="2026-01-17T00:16:56.725451188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:56.725650 containerd[1599]: time="2026-01-17T00:16:56.725559775Z" level=info msg="RemovePodSandbox \"da3493979a13853cab9dc52e4f451f7b91630d5f13c9f455371f55c06ed1f514\" returns successfully" Jan 17 00:16:56.726319 containerd[1599]: time="2026-01-17T00:16:56.726292891Z" level=info msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" Jan 17 00:16:56.826282 containerd[1599]: time="2026-01-17T00:16:56.825114601Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:56.826644 containerd[1599]: time="2026-01-17T00:16:56.826197478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:16:56.826909 containerd[1599]: time="2026-01-17T00:16:56.826857780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:56.827967 kubelet[2699]: E0117 00:16:56.827131 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:56.827967 kubelet[2699]: E0117 00:16:56.827213 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:56.827967 kubelet[2699]: E0117 00:16:56.827397 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:56.830113 kubelet[2699]: E0117 00:16:56.830009 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.782 [WARNING][5151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"96b65c17-4b2e-4680-86fb-3425314d6580", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4", Pod:"goldmane-666569f655-cd6lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9d29ec41b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.783 [INFO][5151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.783 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" iface="eth0" netns="" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.783 [INFO][5151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.783 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.821 [INFO][5158] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.822 [INFO][5158] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.822 [INFO][5158] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.831 [WARNING][5158] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.831 [INFO][5158] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.834 [INFO][5158] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.842070 containerd[1599]: 2026-01-17 00:16:56.837 [INFO][5151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:56.843022 containerd[1599]: time="2026-01-17T00:16:56.842116399Z" level=info msg="TearDown network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" successfully" Jan 17 00:16:56.843022 containerd[1599]: time="2026-01-17T00:16:56.842155661Z" level=info msg="StopPodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" returns successfully" Jan 17 00:16:56.844999 containerd[1599]: time="2026-01-17T00:16:56.844946854Z" level=info msg="RemovePodSandbox for \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" Jan 17 00:16:56.844999 containerd[1599]: time="2026-01-17T00:16:56.844998430Z" level=info msg="Forcibly stopping sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\"" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.931 [WARNING][5175] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"96b65c17-4b2e-4680-86fb-3425314d6580", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"304a6a2df12e1f631e07b17059aca4238e35635cfff68f33c62d694dc2c7b9c4", Pod:"goldmane-666569f655-cd6lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9d29ec41b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.933 [INFO][5175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.934 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" iface="eth0" netns="" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.934 [INFO][5175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.934 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.990 [INFO][5183] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.991 [INFO][5183] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:56.991 [INFO][5183] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:57.015 [WARNING][5183] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:57.015 [INFO][5183] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" HandleID="k8s-pod-network.85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-goldmane--666569f655--cd6lg-eth0" Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:57.019 [INFO][5183] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:57.025984 containerd[1599]: 2026-01-17 00:16:57.022 [INFO][5175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2" Jan 17 00:16:57.027403 containerd[1599]: time="2026-01-17T00:16:57.026623119Z" level=info msg="TearDown network for sandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" successfully" Jan 17 00:16:57.032337 containerd[1599]: time="2026-01-17T00:16:57.031608969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:57.032988 containerd[1599]: time="2026-01-17T00:16:57.032294726Z" level=info msg="RemovePodSandbox \"85388892f2ba1cc6bf5acf625659d7609cfcda97f6a35340f3729b265b2b2cc2\" returns successfully" Jan 17 00:16:57.033515 containerd[1599]: time="2026-01-17T00:16:57.033467848Z" level=info msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.096 [WARNING][5199] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.096 [INFO][5199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.096 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" iface="eth0" netns="" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.096 [INFO][5199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.096 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.139 [INFO][5209] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.139 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.139 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.148 [WARNING][5209] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.148 [INFO][5209] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.151 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:57.156485 containerd[1599]: 2026-01-17 00:16:57.154 [INFO][5199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.157718 containerd[1599]: time="2026-01-17T00:16:57.156546569Z" level=info msg="TearDown network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" successfully" Jan 17 00:16:57.157718 containerd[1599]: time="2026-01-17T00:16:57.156574728Z" level=info msg="StopPodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" returns successfully" Jan 17 00:16:57.157718 containerd[1599]: time="2026-01-17T00:16:57.157159161Z" level=info msg="RemovePodSandbox for \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" Jan 17 00:16:57.157718 containerd[1599]: time="2026-01-17T00:16:57.157191372Z" level=info msg="Forcibly stopping sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\"" Jan 17 00:16:57.200206 systemd[1]: Started sshd@7-159.223.199.43:22-4.153.228.146:55128.service - OpenSSH per-connection server daemon (4.153.228.146:55128). 
Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.236 [WARNING][5223] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" WorkloadEndpoint="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.236 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.236 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" iface="eth0" netns="" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.236 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.236 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.285 [INFO][5232] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.285 [INFO][5232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.285 [INFO][5232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.296 [WARNING][5232] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.296 [INFO][5232] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" HandleID="k8s-pod-network.44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-whisker--f9cc75987--n26vh-eth0" Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.301 [INFO][5232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:57.308054 containerd[1599]: 2026-01-17 00:16:57.304 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a" Jan 17 00:16:57.308054 containerd[1599]: time="2026-01-17T00:16:57.307655312Z" level=info msg="TearDown network for sandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" successfully" Jan 17 00:16:57.314886 containerd[1599]: time="2026-01-17T00:16:57.314795548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 17 00:16:57.315070 containerd[1599]: time="2026-01-17T00:16:57.314944468Z" level=info msg="RemovePodSandbox \"44c40af6021add2f41e7d46c37be8efa0655f47fd3552407626b57e3ad80d38a\" returns successfully" Jan 17 00:16:57.316168 containerd[1599]: time="2026-01-17T00:16:57.315695793Z" level=info msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.382 [WARNING][5248] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe4a7e29-720a-4e34-a53e-e9187d031f57", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8", Pod:"csi-node-driver-pvltb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e58505b48a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.382 [INFO][5248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.382 [INFO][5248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" iface="eth0" netns="" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.382 [INFO][5248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.382 [INFO][5248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.424 [INFO][5255] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.425 [INFO][5255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.425 [INFO][5255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.435 [WARNING][5255] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.436 [INFO][5255] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.438 [INFO][5255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:57.445151 containerd[1599]: 2026-01-17 00:16:57.442 [INFO][5248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.447372 containerd[1599]: time="2026-01-17T00:16:57.446074395Z" level=info msg="TearDown network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" successfully" Jan 17 00:16:57.447372 containerd[1599]: time="2026-01-17T00:16:57.446133524Z" level=info msg="StopPodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" returns successfully" Jan 17 00:16:57.447372 containerd[1599]: time="2026-01-17T00:16:57.446772868Z" level=info msg="RemovePodSandbox for \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" Jan 17 00:16:57.447372 containerd[1599]: time="2026-01-17T00:16:57.446817921Z" level=info msg="Forcibly stopping sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\"" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.498 [WARNING][5269] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe4a7e29-720a-4e34-a53e-e9187d031f57", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cccb0c3e85", ContainerID:"58cfca405f0772dbeefdc124d2e47f9ab7386664fed6ebfbfc75c21677f5e8e8", Pod:"csi-node-driver-pvltb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e58505b48a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.499 [INFO][5269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.499 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" iface="eth0" netns="" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.499 [INFO][5269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.499 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.539 [INFO][5276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.539 [INFO][5276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.539 [INFO][5276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.551 [WARNING][5276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.551 [INFO][5276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" HandleID="k8s-pod-network.6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Workload="ci--4081.3.6--n--cccb0c3e85-k8s-csi--node--driver--pvltb-eth0" Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.554 [INFO][5276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:57.564202 containerd[1599]: 2026-01-17 00:16:57.558 [INFO][5269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726" Jan 17 00:16:57.564202 containerd[1599]: time="2026-01-17T00:16:57.564072962Z" level=info msg="TearDown network for sandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" successfully" Jan 17 00:16:57.567910 containerd[1599]: time="2026-01-17T00:16:57.567794948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:57.568076 containerd[1599]: time="2026-01-17T00:16:57.567928803Z" level=info msg="RemovePodSandbox \"6c7b49a3b6873a5c2a781435e450844dac3b27cb3a124a3d2ccd28e2a2422726\" returns successfully" Jan 17 00:16:57.690186 sshd[5228]: Accepted publickey for core from 4.153.228.146 port 55128 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:16:57.693354 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:57.700407 systemd-logind[1564]: New session 8 of user core. Jan 17 00:16:57.715143 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:16:58.511932 sshd[5228]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:58.522823 systemd[1]: sshd@7-159.223.199.43:22-4.153.228.146:55128.service: Deactivated successfully. Jan 17 00:16:58.526040 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:16:58.528337 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:16:58.531977 systemd-logind[1564]: Removed session 8. 
Jan 17 00:17:00.386372 containerd[1599]: time="2026-01-17T00:17:00.386298646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:17:00.749471 containerd[1599]: time="2026-01-17T00:17:00.749392759Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:00.750502 containerd[1599]: time="2026-01-17T00:17:00.750433868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:17:00.750670 containerd[1599]: time="2026-01-17T00:17:00.750564601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:00.751036 kubelet[2699]: E0117 00:17:00.750968 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:00.751682 kubelet[2699]: E0117 00:17:00.751044 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:00.751682 kubelet[2699]: E0117 00:17:00.751255 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9cvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cd6lg_calico-system(96b65c17-4b2e-4680-86fb-3425314d6580): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:00.753040 kubelet[2699]: E0117 00:17:00.752961 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:17:01.372867 containerd[1599]: time="2026-01-17T00:17:01.370541998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:01.721644 containerd[1599]: time="2026-01-17T00:17:01.721502823Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:01.722802 containerd[1599]: time="2026-01-17T00:17:01.722724419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:01.723008 containerd[1599]: time="2026-01-17T00:17:01.722774816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:01.723206 kubelet[2699]: E0117 00:17:01.723143 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:01.723691 kubelet[2699]: E0117 00:17:01.723221 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:01.723691 kubelet[2699]: E0117 00:17:01.723519 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-8tc5j_calico-apiserver(43a11e4d-d5b2-4905-990b-145b7f453524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:01.724457 containerd[1599]: time="2026-01-17T00:17:01.724401598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:17:01.725441 kubelet[2699]: E0117 00:17:01.725002 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:17:02.060503 containerd[1599]: time="2026-01-17T00:17:02.060258873Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 
00:17:02.063376 containerd[1599]: time="2026-01-17T00:17:02.062664073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:17:02.063376 containerd[1599]: time="2026-01-17T00:17:02.063017478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:02.063660 kubelet[2699]: E0117 00:17:02.063255 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:02.063660 kubelet[2699]: E0117 00:17:02.063317 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:02.063660 kubelet[2699]: E0117 00:17:02.063491 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xccxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4ffb8bcd-m826d_calico-system(7b8b1bac-c0de-45cb-b647-eb4712722238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:02.065091 kubelet[2699]: E0117 00:17:02.065003 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:17:02.545712 systemd[1]: Started sshd@8-159.223.199.43:22-3.138.190.72:47080.service - OpenSSH per-connection server daemon (3.138.190.72:47080). Jan 17 00:17:03.425780 sshd[5301]: Connection closed by 3.138.190.72 port 47080 Jan 17 00:17:03.430883 systemd[1]: sshd@8-159.223.199.43:22-3.138.190.72:47080.service: Deactivated successfully. Jan 17 00:17:03.478293 kubelet[2699]: I0117 00:17:03.478053 2699 ???:1] "http: TLS handshake error from 3.138.190.72:49334: EOF" Jan 17 00:17:03.587520 systemd[1]: Started sshd@9-159.223.199.43:22-4.153.228.146:55134.service - OpenSSH per-connection server daemon (4.153.228.146:55134). Jan 17 00:17:04.051497 sshd[5307]: Accepted publickey for core from 4.153.228.146 port 55134 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:04.054973 sshd[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:04.066639 systemd-logind[1564]: New session 9 of user core. Jan 17 00:17:04.078480 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:17:04.374402 containerd[1599]: time="2026-01-17T00:17:04.373968435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:04.516741 sshd[5307]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:04.521689 systemd[1]: sshd@9-159.223.199.43:22-4.153.228.146:55134.service: Deactivated successfully. Jan 17 00:17:04.530632 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:17:04.533640 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:17:04.535650 systemd-logind[1564]: Removed session 9. 
Jan 17 00:17:04.746896 containerd[1599]: time="2026-01-17T00:17:04.746670268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:04.748491 containerd[1599]: time="2026-01-17T00:17:04.748344704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:04.748491 containerd[1599]: time="2026-01-17T00:17:04.748431879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:04.749308 kubelet[2699]: E0117 00:17:04.748755 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:04.749308 kubelet[2699]: E0117 00:17:04.748856 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:04.749308 kubelet[2699]: E0117 00:17:04.749059 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-g2rj5_calico-apiserver(cd6dbe24-c430-428d-92d9-91f581859d83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:04.750845 kubelet[2699]: E0117 00:17:04.750644 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:17:07.370060 kubelet[2699]: E0117 00:17:07.369912 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:17:09.584238 systemd[1]: Started sshd@10-159.223.199.43:22-4.153.228.146:51406.service - OpenSSH per-connection server daemon (4.153.228.146:51406). 
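The pull failures above are plain registry 404s: containerd's "trying next host - response was http.StatusNotFound" means ghcr.io has no manifest for the v3.30.4 tag of these flatcar/calico images. A minimal Go sketch that reproduces the same anonymous manifest lookup containerd performs — the /token and /v2/ paths follow the standard registry token flow and the OCI distribution spec as served by ghcr.io; the repo and tag are taken from the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	repo, tag := "flatcar/calico/kube-controllers", "v3.30.4"

    	// Anonymous pull token, standard registry token flow.
    	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// HEAD the tag's manifest; a 404 here is exactly the
    	// http.StatusNotFound containerd logs before giving up.
    	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	res.Body.Close()
    	fmt.Println(repo+":"+tag, "->", res.Status) // expect: 404 Not Found
    }

Run against any of the failing references this should print "404 Not Found", which matches the log's NotFound errors: the tag simply does not resolve upstream, rather than this being a credentials or network problem.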
Jan 17 00:17:09.840197 kubelet[2699]: E0117 00:17:09.839216 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:09.991325 sshd[5324]: Accepted publickey for core from 4.153.228.146 port 51406 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:09.993554 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:10.001115 systemd-logind[1564]: New session 10 of user core. Jan 17 00:17:10.011343 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:17:10.371435 kubelet[2699]: E0117 00:17:10.371331 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:17:10.385488 sshd[5324]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:10.398751 systemd[1]: sshd@10-159.223.199.43:22-4.153.228.146:51406.service: Deactivated successfully. Jan 17 00:17:10.404884 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:17:10.405136 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:17:10.407817 systemd-logind[1564]: Removed session 10. Jan 17 00:17:10.461278 systemd[1]: Started sshd@11-159.223.199.43:22-4.153.228.146:51418.service - OpenSSH per-connection server daemon (4.153.228.146:51418). Jan 17 00:17:10.886996 sshd[5361]: Accepted publickey for core from 4.153.228.146 port 51418 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:10.894790 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:10.908755 systemd-logind[1564]: New session 11 of user core. Jan 17 00:17:10.916459 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:17:11.318575 sshd[5361]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:11.322415 systemd[1]: sshd@11-159.223.199.43:22-4.153.228.146:51418.service: Deactivated successfully. Jan 17 00:17:11.329374 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:17:11.330283 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:17:11.331631 systemd-logind[1564]: Removed session 11. Jan 17 00:17:11.383193 systemd[1]: Started sshd@12-159.223.199.43:22-4.153.228.146:51422.service - OpenSSH per-connection server daemon (4.153.228.146:51422). 
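The recurring dns.go "Nameserver limits exceeded" entries reflect the glibc resolver's hard cap of three nameserver lines: the node's resolv.conf lists more than three entries, so kubelet applies only the first three (hence the applied line "67.207.67.3 67.207.67.2 67.207.67.3") and warns about the rest. A small Go sketch of the same check, assuming the standard /etc/resolv.conf format — illustrative only, not kubelet's actual dns.go implementation:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const maxNS = 3 // glibc MAXNS: the resolver ignores entries past this

    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// Collect every "nameserver <addr>" line in file order.
    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if err := sc.Err(); err != nil {
    		panic(err)
    	}

    	if len(servers) > maxNS {
    		fmt.Printf("nameserver limits exceeded: applied %v, omitted %v\n",
    			servers[:maxNS], servers[maxNS:])
    	} else {
    		fmt.Printf("applied nameservers: %v\n", servers)
    	}
    }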
Jan 17 00:17:11.769530 sshd[5372]: Accepted publickey for core from 4.153.228.146 port 51422 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:11.771723 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:11.777876 systemd-logind[1564]: New session 12 of user core. Jan 17 00:17:11.783361 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:17:12.142196 sshd[5372]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:12.146125 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:17:12.146620 systemd[1]: sshd@12-159.223.199.43:22-4.153.228.146:51422.service: Deactivated successfully. Jan 17 00:17:12.153916 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:17:12.156042 systemd-logind[1564]: Removed session 12. Jan 17 00:17:13.372915 kubelet[2699]: E0117 00:17:13.372807 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:17:14.367550 kubelet[2699]: E0117 00:17:14.366919 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:17:16.368479 kubelet[2699]: E0117 00:17:16.367945 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:17:16.368479 kubelet[2699]: E0117 00:17:16.367991 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:17:17.224207 systemd[1]: Started sshd@13-159.223.199.43:22-4.153.228.146:36814.service - OpenSSH 
per-connection server daemon (4.153.228.146:36814). Jan 17 00:17:17.651020 sshd[5389]: Accepted publickey for core from 4.153.228.146 port 36814 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:17.652694 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:17.660546 systemd-logind[1564]: New session 13 of user core. Jan 17 00:17:17.668855 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:17:18.075284 sshd[5389]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:18.084462 systemd[1]: sshd@13-159.223.199.43:22-4.153.228.146:36814.service: Deactivated successfully. Jan 17 00:17:18.096363 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:17:18.099016 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:17:18.101890 systemd-logind[1564]: Removed session 13. Jan 17 00:17:18.373477 containerd[1599]: time="2026-01-17T00:17:18.373061584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:17:18.713324 containerd[1599]: time="2026-01-17T00:17:18.713055277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:18.714555 containerd[1599]: time="2026-01-17T00:17:18.714394464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:17:18.714555 containerd[1599]: time="2026-01-17T00:17:18.714482644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:17:18.714917 kubelet[2699]: E0117 00:17:18.714858 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:18.715354 kubelet[2699]: E0117 00:17:18.714931 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:18.715354 kubelet[2699]: E0117 00:17:18.715101 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2324e28db44d456388a17c04446e2b47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:18.717818 containerd[1599]: time="2026-01-17T00:17:18.717699325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:17:19.042541 containerd[1599]: time="2026-01-17T00:17:19.042376999Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:19.043889 containerd[1599]: time="2026-01-17T00:17:19.043674463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:17:19.046026 containerd[1599]: time="2026-01-17T00:17:19.045975353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:19.046244 kubelet[2699]: E0117 00:17:19.046197 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:19.046367 kubelet[2699]: E0117 00:17:19.046262 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:19.052714 kubelet[2699]: E0117 00:17:19.046382 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xczdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8677d57b99-wp5xq_calico-system(2f056ee9-6914-4575-b585-f333a8c77da9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:19.054790 kubelet[2699]: E0117 00:17:19.054544 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:17:20.372779 kubelet[2699]: E0117 00:17:20.372741 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 
17 00:17:23.153330 systemd[1]: Started sshd@14-159.223.199.43:22-4.153.228.146:36828.service - OpenSSH per-connection server daemon (4.153.228.146:36828). Jan 17 00:17:23.636530 sshd[5409]: Accepted publickey for core from 4.153.228.146 port 36828 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:23.638082 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:23.647747 systemd-logind[1564]: New session 14 of user core. Jan 17 00:17:23.651253 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:17:24.086175 sshd[5409]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:24.095416 systemd[1]: sshd@14-159.223.199.43:22-4.153.228.146:36828.service: Deactivated successfully. Jan 17 00:17:24.103257 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:17:24.103952 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:17:24.110683 systemd-logind[1564]: Removed session 14. Jan 17 00:17:24.370275 containerd[1599]: time="2026-01-17T00:17:24.370114545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:17:24.697205 containerd[1599]: time="2026-01-17T00:17:24.696781351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:24.699075 containerd[1599]: time="2026-01-17T00:17:24.699012624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:17:24.699287 containerd[1599]: time="2026-01-17T00:17:24.699127709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:17:24.699375 kubelet[2699]: E0117 00:17:24.699334 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:24.700155 kubelet[2699]: E0117 00:17:24.699392 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:24.700155 kubelet[2699]: E0117 00:17:24.699525 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:24.708884 containerd[1599]: time="2026-01-17T00:17:24.708798241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:17:25.023343 containerd[1599]: time="2026-01-17T00:17:25.023264729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:25.024749 containerd[1599]: time="2026-01-17T00:17:25.024568391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:17:25.024749 containerd[1599]: time="2026-01-17T00:17:25.024670054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:17:25.025050 kubelet[2699]: E0117 00:17:25.024979 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:25.025135 kubelet[2699]: E0117 00:17:25.025065 2699 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:25.026602 kubelet[2699]: E0117 00:17:25.025252 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pvltb_calico-system(fe4a7e29-720a-4e34-a53e-e9187d031f57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:25.026602 kubelet[2699]: E0117 00:17:25.026500 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57" Jan 17 00:17:25.370201 kubelet[2699]: E0117 00:17:25.369427 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:25.371069 kubelet[2699]: E0117 00:17:25.371045 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:27.365614 kubelet[2699]: E0117 00:17:27.365208 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:27.375240 containerd[1599]: time="2026-01-17T00:17:27.374765388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:17:27.711029 containerd[1599]: time="2026-01-17T00:17:27.710949963Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:27.712300 containerd[1599]: time="2026-01-17T00:17:27.712183479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:17:27.712300 containerd[1599]: time="2026-01-17T00:17:27.712237256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:27.712553 kubelet[2699]: E0117 00:17:27.712503 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:27.712618 kubelet[2699]: E0117 00:17:27.712566 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:27.713892 kubelet[2699]: E0117 00:17:27.712714 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xccxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4ffb8bcd-m826d_calico-system(7b8b1bac-c0de-45cb-b647-eb4712722238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:27.714246 kubelet[2699]: E0117 00:17:27.714192 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238" Jan 17 00:17:28.371041 containerd[1599]: time="2026-01-17T00:17:28.369717119Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:28.735170 containerd[1599]: time="2026-01-17T00:17:28.734992081Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:28.739257 containerd[1599]: time="2026-01-17T00:17:28.738621738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:28.739257 containerd[1599]: time="2026-01-17T00:17:28.738808080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:28.746160 kubelet[2699]: E0117 00:17:28.743357 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:28.746160 kubelet[2699]: E0117 00:17:28.743441 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:28.746160 kubelet[2699]: E0117 00:17:28.743766 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-g2rj5_calico-apiserver(cd6dbe24-c430-428d-92d9-91f581859d83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:28.753700 kubelet[2699]: E0117 00:17:28.753358 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83" Jan 17 00:17:28.755915 containerd[1599]: time="2026-01-17T00:17:28.754931730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:29.069576 containerd[1599]: time="2026-01-17T00:17:29.069127386Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:29.070413 containerd[1599]: time="2026-01-17T00:17:29.070175302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:29.070413 containerd[1599]: time="2026-01-17T00:17:29.070228621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:29.071027 kubelet[2699]: E0117 00:17:29.070959 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:29.071027 kubelet[2699]: E0117 00:17:29.071022 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:29.072859 kubelet[2699]: E0117 
00:17:29.071299 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6cc8d58d-8tc5j_calico-apiserver(43a11e4d-d5b2-4905-990b-145b7f453524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:29.073092 containerd[1599]: time="2026-01-17T00:17:29.072795458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:17:29.073281 kubelet[2699]: E0117 00:17:29.073206 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524" Jan 17 00:17:29.170266 systemd[1]: Started sshd@15-159.223.199.43:22-4.153.228.146:39456.service - OpenSSH per-connection server daemon (4.153.228.146:39456). 
Jan 17 00:17:29.372796 kubelet[2699]: E0117 00:17:29.372469 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9" Jan 17 00:17:29.391070 containerd[1599]: time="2026-01-17T00:17:29.391015474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:29.392034 containerd[1599]: time="2026-01-17T00:17:29.391988341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:17:29.392180 containerd[1599]: time="2026-01-17T00:17:29.392085694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:29.392985 kubelet[2699]: E0117 00:17:29.392948 2699 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:29.393050 kubelet[2699]: E0117 00:17:29.392999 2699 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:29.393244 kubelet[2699]: E0117 00:17:29.393125 2699 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9cvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cd6lg_calico-system(96b65c17-4b2e-4680-86fb-3425314d6580): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:29.394527 kubelet[2699]: E0117 00:17:29.394438 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580" Jan 17 00:17:29.624207 sshd[5425]: Accepted publickey 
for core from 4.153.228.146 port 39456 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:29.625937 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:29.632549 systemd-logind[1564]: New session 15 of user core. Jan 17 00:17:29.640209 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:17:30.039125 sshd[5425]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:30.044124 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:17:30.045408 systemd[1]: sshd@15-159.223.199.43:22-4.153.228.146:39456.service: Deactivated successfully. Jan 17 00:17:30.054500 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:17:30.057257 systemd-logind[1564]: Removed session 15. Jan 17 00:17:35.102225 systemd[1]: Started sshd@16-159.223.199.43:22-4.153.228.146:52132.service - OpenSSH per-connection server daemon (4.153.228.146:52132). Jan 17 00:17:35.497422 sshd[5441]: Accepted publickey for core from 4.153.228.146 port 52132 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:35.499601 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:35.506144 systemd-logind[1564]: New session 16 of user core. Jan 17 00:17:35.512261 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:17:35.879218 sshd[5441]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:35.885862 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:17:35.886278 systemd[1]: sshd@16-159.223.199.43:22-4.153.228.146:52132.service: Deactivated successfully. Jan 17 00:17:35.889929 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:17:35.891305 systemd-logind[1564]: Removed session 16. Jan 17 00:17:35.946231 systemd[1]: Started sshd@17-159.223.199.43:22-4.153.228.146:52142.service - OpenSSH per-connection server daemon (4.153.228.146:52142). Jan 17 00:17:36.334402 sshd[5455]: Accepted publickey for core from 4.153.228.146 port 52142 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:36.336495 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:36.344035 systemd-logind[1564]: New session 17 of user core. Jan 17 00:17:36.352330 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:17:36.370784 kubelet[2699]: E0117 00:17:36.370100 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57"
Jan 17 00:17:36.941554 sshd[5455]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:36.946799 systemd[1]: sshd@17-159.223.199.43:22-4.153.228.146:52142.service: Deactivated successfully.
Jan 17 00:17:36.952518 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:17:36.952822 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:17:36.954734 systemd-logind[1564]: Removed session 17.
Jan 17 00:17:37.014487 systemd[1]: Started sshd@18-159.223.199.43:22-4.153.228.146:52150.service - OpenSSH per-connection server daemon (4.153.228.146:52150).
Jan 17 00:17:37.433519 sshd[5467]: Accepted publickey for core from 4.153.228.146 port 52150 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:37.435584 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:37.441614 systemd-logind[1564]: New session 18 of user core.
Jan 17 00:17:37.450569 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:17:38.334725 sshd[5467]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:38.344553 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:17:38.344615 systemd[1]: sshd@18-159.223.199.43:22-4.153.228.146:52150.service: Deactivated successfully.
Jan 17 00:17:38.347359 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:17:38.353444 systemd-logind[1564]: Removed session 18.
Jan 17 00:17:38.402250 systemd[1]: Started sshd@19-159.223.199.43:22-4.153.228.146:52160.service - OpenSSH per-connection server daemon (4.153.228.146:52160).
Jan 17 00:17:38.841501 sshd[5486]: Accepted publickey for core from 4.153.228.146 port 52160 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:38.844666 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:38.850260 systemd-logind[1564]: New session 19 of user core.
Jan 17 00:17:38.858260 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:17:39.499224 sshd[5486]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:39.503235 systemd[1]: sshd@19-159.223.199.43:22-4.153.228.146:52160.service: Deactivated successfully.
Jan 17 00:17:39.504898 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:17:39.509499 systemd[1]: session-19.scope: Deactivated successfully.
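[Annotation] The pod_workers record above now covers both containers of the csi-node-driver pod at once, and the kubelet re-reports it on every sync. Rather than grepping the journal, the affected containers can be listed from the API. A minimal client-go sketch follows, assuming ordinary kubeconfig access; only the namespace and reason strings are taken from the log.

// pullbackoff.go - minimal sketch: list containers in calico-system stuck on image pull errors.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed default location
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("calico-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			// Waiting.Reason carries the same strings the journal shows.
			if w := cs.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s: %s (%s)\n", pod.Name, cs.Name, w.Reason, cs.Image)
			}
		}
	}
}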
Jan 17 00:17:39.511757 systemd-logind[1564]: Removed session 19.
Jan 17 00:17:39.570229 systemd[1]: Started sshd@20-159.223.199.43:22-4.153.228.146:52170.service - OpenSSH per-connection server daemon (4.153.228.146:52170).
Jan 17 00:17:40.001710 sshd[5497]: Accepted publickey for core from 4.153.228.146 port 52170 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:40.004140 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:40.011145 systemd-logind[1564]: New session 20 of user core.
Jan 17 00:17:40.018380 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:17:40.368171 kubelet[2699]: E0117 00:17:40.367244 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238"
Jan 17 00:17:40.414186 sshd[5497]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:40.422354 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:17:40.424818 systemd[1]: sshd@20-159.223.199.43:22-4.153.228.146:52170.service: Deactivated successfully.
Jan 17 00:17:40.431802 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:17:40.433644 systemd-logind[1564]: Removed session 20.
Jan 17 00:17:41.371094 kubelet[2699]: E0117 00:17:41.371043 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83"
Jan 17 00:17:42.365992 kubelet[2699]: E0117 00:17:42.365805 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524"
Jan 17 00:17:43.368168 kubelet[2699]: E0117 00:17:43.367791 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580"
Jan 17 00:17:43.370525 kubelet[2699]: E0117 00:17:43.370443 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9"
Jan 17 00:17:45.504311 systemd[1]: Started sshd@21-159.223.199.43:22-4.153.228.146:44388.service - OpenSSH per-connection server daemon (4.153.228.146:44388).
Jan 17 00:17:45.968374 sshd[5533]: Accepted publickey for core from 4.153.228.146 port 44388 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:45.970547 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:45.976365 systemd-logind[1564]: New session 21 of user core.
Jan 17 00:17:45.984322 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:17:46.423947 sshd[5533]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:46.432115 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:17:46.432567 systemd[1]: sshd@21-159.223.199.43:22-4.153.228.146:44388.service: Deactivated successfully.
Jan 17 00:17:46.441681 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:17:46.446964 systemd-logind[1564]: Removed session 21.
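[Annotation] By this point every Calico v3.30.4 image under ghcr.io/flatcar/calico/ has failed the same way, which points at tags missing from that namespace rather than per-image problems. One common remediation is to mirror the images into the namespace the kubelet expects. A hedged sketch with go-containerregistry's crane package follows; the docker.io/calico source repositories are an assumption about where upstream v3.30.4 tags live, not something the log shows, and pushing to ghcr.io/flatcar needs credentials (crane reads the local docker config) that the sketch does not set up.

// mirror.go - minimal sketch: copy assumed upstream images to the tags the kubelet wants.
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	images := []string{"goldmane", "whisker", "whisker-backend", "csi",
		"node-driver-registrar", "kube-controllers", "apiserver"}
	for _, img := range images {
		src := fmt.Sprintf("docker.io/calico/%s:v3.30.4", img)       // assumed upstream location
		dst := fmt.Sprintf("ghcr.io/flatcar/calico/%s:v3.30.4", img) // tag named in the journal
		if err := crane.Copy(src, dst); err != nil {
			fmt.Printf("copy %s -> %s failed: %v\n", src, dst, err)
		}
	}
}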
Jan 17 00:17:48.368104 kubelet[2699]: E0117 00:17:48.368006 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvltb" podUID="fe4a7e29-720a-4e34-a53e-e9187d031f57"
Jan 17 00:17:49.367118 kubelet[2699]: E0117 00:17:49.365413 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:17:51.366490 kubelet[2699]: E0117 00:17:51.366410 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4ffb8bcd-m826d" podUID="7b8b1bac-c0de-45cb-b647-eb4712722238"
Jan 17 00:17:51.487210 systemd[1]: Started sshd@22-159.223.199.43:22-4.153.228.146:44390.service - OpenSSH per-connection server daemon (4.153.228.146:44390).
Jan 17 00:17:51.888913 sshd[5547]: Accepted publickey for core from 4.153.228.146 port 44390 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:51.896095 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:51.910436 systemd-logind[1564]: New session 22 of user core.
Jan 17 00:17:51.916958 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:17:52.345010 sshd[5547]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:52.355272 systemd[1]: sshd@22-159.223.199.43:22-4.153.228.146:44390.service: Deactivated successfully.
Jan 17 00:17:52.368116 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:17:52.370260 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:17:52.375152 systemd-logind[1564]: Removed session 22.
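[Annotation] The dns.go:153 warning above is unrelated to the image pulls: the kubelet trims the node's resolv.conf to the classic three-nameserver resolver cap, and the applied line it reports even carries a duplicate (67.207.67.3 appears twice), so only two distinct servers actually survive. A minimal Go sketch of that trimming, with deduplication added on top (the kubelet itself evidently does not dedupe, or the duplicate would not show):

// resolvcheck.go - minimal sketch: trim /etc/resolv.conf nameservers to the 3-entry cap.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	const maxNameservers = 3 // classic glibc resolver limit the kubelet warns about
	seen := map[string]bool{}
	var kept []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" && !seen[fields[1]] {
			seen[fields[1]] = true // drops duplicates such as the repeated 67.207.67.3
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			}
		}
	}
	fmt.Println("applied nameserver line:", strings.Join(kept, " "))
}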
Jan 17 00:17:53.369030 kubelet[2699]: E0117 00:17:53.368916 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-g2rj5" podUID="cd6dbe24-c430-428d-92d9-91f581859d83"
Jan 17 00:17:55.374283 kubelet[2699]: E0117 00:17:55.374232 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cd6lg" podUID="96b65c17-4b2e-4680-86fb-3425314d6580"
Jan 17 00:17:55.379204 kubelet[2699]: E0117 00:17:55.379147 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8677d57b99-wp5xq" podUID="2f056ee9-6914-4575-b585-f333a8c77da9"
Jan 17 00:17:55.379489 kubelet[2699]: E0117 00:17:55.379392 2699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6cc8d58d-8tc5j" podUID="43a11e4d-d5b2-4905-990b-145b7f453524"
Jan 17 00:17:57.423193 systemd[1]: Started sshd@23-159.223.199.43:22-4.153.228.146:47192.service - OpenSSH per-connection server daemon (4.153.228.146:47192).
Jan 17 00:17:57.857176 sshd[5563]: Accepted publickey for core from 4.153.228.146 port 47192 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:57.862436 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:57.875379 systemd-logind[1564]: New session 23 of user core.
Jan 17 00:17:57.881531 systemd[1]: Started session-23.scope - Session 23 of User core.
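[Annotation] The widening gaps between these repeats reflect the kubelet's image back-off: each failed pull roughly doubles the retry delay up to a five-minute cap, which is why the same podUIDs reappear seconds apart early in the log and minutes apart here. The constants below are the commonly documented upstream defaults, an assumption rather than something this log proves:

// backoff.go - minimal sketch of the doubling retry delay behind "Back-off pulling image".
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed kubelet defaults
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // cap explains the steady ~5m cadence late in the journal
		}
	}
}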
Jan 17 00:17:58.467326 sshd[5563]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:58.480105 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:17:58.484269 systemd[1]: sshd@23-159.223.199.43:22-4.153.228.146:47192.service: Deactivated successfully.
Jan 17 00:17:58.494888 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:17:58.497060 systemd-logind[1564]: Removed session 23.
Jan 17 00:17:59.365359 kubelet[2699]: E0117 00:17:59.365052 2699 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"