Jan 17 00:19:12.014064 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:19:12.014092 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.014105 kernel: BIOS-provided physical RAM map:
Jan 17 00:19:12.014112 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:19:12.014118 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:19:12.014125 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:19:12.014132 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 00:19:12.014139 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 00:19:12.014145 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:19:12.014153 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:19:12.014160 kernel: NX (Execute Disable) protection: active
Jan 17 00:19:12.014166 kernel: APIC: Static calls initialized
Jan 17 00:19:12.014177 kernel: SMBIOS 2.8 present.
Jan 17 00:19:12.014185 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 00:19:12.014193 kernel: Hypervisor detected: KVM
Jan 17 00:19:12.014204 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:19:12.014215 kernel: kvm-clock: using sched offset of 3580209746 cycles
Jan 17 00:19:12.014223 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:19:12.014230 kernel: tsc: Detected 2000.000 MHz processor
Jan 17 00:19:12.014237 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:19:12.014244 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:19:12.014275 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 00:19:12.014282 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:19:12.014290 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:19:12.014299 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:19:12.014306 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 00:19:12.014313 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014320 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014327 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014334 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 00:19:12.014341 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014350 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014357 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014366 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.014373 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Jan 17 00:19:12.014380 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Jan 17 00:19:12.014387 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 00:19:12.014393 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Jan 17 00:19:12.014400 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Jan 17 00:19:12.014407 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Jan 17 00:19:12.014420 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Jan 17 00:19:12.014428 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:19:12.014435 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:19:12.014442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:19:12.014449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 00:19:12.014462 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 00:19:12.014469 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 00:19:12.014479 kernel: Zone ranges:
Jan 17 00:19:12.014487 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:19:12.014494 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 00:19:12.014501 kernel: Normal empty
Jan 17 00:19:12.014508 kernel: Movable zone start for each node
Jan 17 00:19:12.014515 kernel: Early memory node ranges
Jan 17 00:19:12.014523 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:19:12.014530 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 00:19:12.014537 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 00:19:12.014547 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:19:12.014554 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:19:12.014565 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 00:19:12.014572 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:19:12.014580 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:19:12.014587 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:19:12.014594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:19:12.014601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:19:12.014608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:19:12.014618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:19:12.014626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:19:12.014633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:19:12.014640 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:19:12.014647 kernel: TSC deadline timer available
Jan 17 00:19:12.014655 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:19:12.014662 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:19:12.014669 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 00:19:12.014680 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:19:12.014688 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:19:12.014698 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:19:12.014706 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:19:12.014713 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:19:12.014720 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:19:12.014730 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:19:12.014743 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.014754 kernel: random: crng init done
Jan 17 00:19:12.014765 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:19:12.014781 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:19:12.014792 kernel: Fallback order for Node 0: 0
Jan 17 00:19:12.014803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 00:19:12.014813 kernel: Policy zone: DMA32
Jan 17 00:19:12.014824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:19:12.014835 kernel: Memory: 1971208K/2096612K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 125144K reserved, 0K cma-reserved)
Jan 17 00:19:12.014846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:19:12.014856 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:19:12.014867 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:19:12.014916 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:19:12.014927 kernel: Dynamic Preempt: voluntary
Jan 17 00:19:12.014938 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:19:12.014951 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:19:12.014961 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:19:12.014972 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:19:12.014983 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:19:12.014993 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:19:12.015004 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:19:12.015018 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:19:12.015029 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:19:12.015039 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:19:12.015050 kernel: Console: colour VGA+ 80x25
Jan 17 00:19:12.015066 kernel: printk: console [tty0] enabled
Jan 17 00:19:12.015077 kernel: printk: console [ttyS0] enabled
Jan 17 00:19:12.015087 kernel: ACPI: Core revision 20230628
Jan 17 00:19:12.015098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:19:12.015108 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:19:12.015122 kernel: x2apic enabled
Jan 17 00:19:12.015133 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:19:12.015144 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:19:12.015155 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 00:19:12.015165 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 17 00:19:12.015176 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 00:19:12.015187 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 00:19:12.015198 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:19:12.015221 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:19:12.015232 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:19:12.015244 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 00:19:12.015272 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:19:12.015284 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:19:12.015292 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:19:12.015299 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:19:12.015307 kernel: active return thunk: its_return_thunk
Jan 17 00:19:12.015319 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:19:12.015331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:19:12.015339 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:19:12.015347 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:19:12.015355 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:19:12.015364 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:19:12.015371 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:19:12.015379 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:19:12.015387 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:19:12.015398 kernel: landlock: Up and running.
Jan 17 00:19:12.015406 kernel: SELinux: Initializing.
Jan 17 00:19:12.015414 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.015423 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.015431 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 00:19:12.015439 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.015447 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.015456 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.015465 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 00:19:12.015475 kernel: signal: max sigframe size: 1776
Jan 17 00:19:12.015483 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:19:12.015491 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:19:12.015499 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:19:12.015507 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:19:12.015515 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:19:12.015522 kernel: .... node #0, CPUs: #1
Jan 17 00:19:12.015530 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:19:12.015543 kernel: smpboot: Max logical packages: 1
Jan 17 00:19:12.015553 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 17 00:19:12.015561 kernel: devtmpfs: initialized
Jan 17 00:19:12.015569 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:19:12.015577 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:19:12.015585 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.015593 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:19:12.015601 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:19:12.015609 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:19:12.015617 kernel: audit: type=2000 audit(1768609150.227:1): state=initialized audit_enabled=0 res=1
Jan 17 00:19:12.015628 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:19:12.015636 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:19:12.015644 kernel: cpuidle: using governor menu
Jan 17 00:19:12.015652 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:19:12.015660 kernel: dca service started, version 1.12.1
Jan 17 00:19:12.015668 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:19:12.015676 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:19:12.015684 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:19:12.015692 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:19:12.015703 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:19:12.015711 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:19:12.015719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:19:12.015727 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:19:12.015735 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:19:12.015743 kernel: ACPI: Interpreter enabled
Jan 17 00:19:12.015751 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:19:12.015759 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:19:12.015767 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:19:12.015778 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:19:12.015786 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:19:12.015794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:19:12.016028 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:19:12.016146 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:19:12.016245 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:19:12.019966 kernel: acpiphp: Slot [3] registered
Jan 17 00:19:12.019998 kernel: acpiphp: Slot [4] registered
Jan 17 00:19:12.020011 kernel: acpiphp: Slot [5] registered
Jan 17 00:19:12.020022 kernel: acpiphp: Slot [6] registered
Jan 17 00:19:12.020035 kernel: acpiphp: Slot [7] registered
Jan 17 00:19:12.020047 kernel: acpiphp: Slot [8] registered
Jan 17 00:19:12.020060 kernel: acpiphp: Slot [9] registered
Jan 17 00:19:12.020071 kernel: acpiphp: Slot [10] registered
Jan 17 00:19:12.020086 kernel: acpiphp: Slot [11] registered
Jan 17 00:19:12.020099 kernel: acpiphp: Slot [12] registered
Jan 17 00:19:12.020111 kernel: acpiphp: Slot [13] registered
Jan 17 00:19:12.020123 kernel: acpiphp: Slot [14] registered
Jan 17 00:19:12.020131 kernel: acpiphp: Slot [15] registered
Jan 17 00:19:12.020139 kernel: acpiphp: Slot [16] registered
Jan 17 00:19:12.020147 kernel: acpiphp: Slot [17] registered
Jan 17 00:19:12.020156 kernel: acpiphp: Slot [18] registered
Jan 17 00:19:12.020164 kernel: acpiphp: Slot [19] registered
Jan 17 00:19:12.020172 kernel: acpiphp: Slot [20] registered
Jan 17 00:19:12.020180 kernel: acpiphp: Slot [21] registered
Jan 17 00:19:12.020188 kernel: acpiphp: Slot [22] registered
Jan 17 00:19:12.020198 kernel: acpiphp: Slot [23] registered
Jan 17 00:19:12.020206 kernel: acpiphp: Slot [24] registered
Jan 17 00:19:12.020214 kernel: acpiphp: Slot [25] registered
Jan 17 00:19:12.020222 kernel: acpiphp: Slot [26] registered
Jan 17 00:19:12.020230 kernel: acpiphp: Slot [27] registered
Jan 17 00:19:12.020238 kernel: acpiphp: Slot [28] registered
Jan 17 00:19:12.020246 kernel: acpiphp: Slot [29] registered
Jan 17 00:19:12.020285 kernel: acpiphp: Slot [30] registered
Jan 17 00:19:12.020293 kernel: acpiphp: Slot [31] registered
Jan 17 00:19:12.020301 kernel: PCI host bridge to bus 0000:00
Jan 17 00:19:12.020518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:19:12.020610 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:19:12.020700 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:12.020786 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:19:12.020871 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 00:19:12.020958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:19:12.021085 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:19:12.021206 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:19:12.021426 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 00:19:12.021568 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 00:19:12.021668 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 00:19:12.021763 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 00:19:12.021858 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 00:19:12.021963 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 00:19:12.022074 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 00:19:12.022193 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 00:19:12.023298 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:19:12.023428 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 00:19:12.023529 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 00:19:12.023651 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:19:12.023750 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 00:19:12.023849 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 00:19:12.023945 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 00:19:12.024041 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 00:19:12.024137 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:19:12.024702 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:19:12.024869 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 00:19:12.024972 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 00:19:12.025069 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 00:19:12.025178 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:19:12.028006 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 00:19:12.028132 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 00:19:12.028229 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 00:19:12.028398 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 00:19:12.028495 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 00:19:12.028591 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 00:19:12.028688 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 00:19:12.028800 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:19:12.028898 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 00:19:12.029061 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 00:19:12.029176 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 00:19:12.029870 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:19:12.029988 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 00:19:12.030088 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 00:19:12.030185 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 00:19:12.030402 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 00:19:12.030507 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 00:19:12.030612 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 00:19:12.030623 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:19:12.030632 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:19:12.030640 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:19:12.030648 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:19:12.030656 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:19:12.030665 kernel: iommu: Default domain type: Translated
Jan 17 00:19:12.030676 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:19:12.030684 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:19:12.030693 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:19:12.030701 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:19:12.030709 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 00:19:12.030810 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 00:19:12.030909 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 00:19:12.031007 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:19:12.031021 kernel: vgaarb: loaded
Jan 17 00:19:12.031030 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:19:12.031038 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:19:12.031046 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:19:12.031055 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:19:12.031063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:19:12.031071 kernel: pnp: PnP ACPI init
Jan 17 00:19:12.031080 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 00:19:12.031088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:19:12.031096 kernel: NET: Registered PF_INET protocol family
Jan 17 00:19:12.031108 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:19:12.031116 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:19:12.031124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:19:12.031133 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:19:12.031141 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:19:12.031150 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:19:12.031158 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.031166 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.031177 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:19:12.031186 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:19:12.032367 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:19:12.032473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:19:12.032562 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:12.032649 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:19:12.032737 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 00:19:12.032842 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 00:19:12.032950 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:19:12.032982 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:19:12.033133 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 36207 usecs
Jan 17 00:19:12.033151 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:19:12.033166 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:19:12.033176 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 00:19:12.033185 kernel: Initialise system trusted keyrings
Jan 17 00:19:12.033193 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:19:12.033202 kernel: Key type asymmetric registered
Jan 17 00:19:12.033215 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:19:12.033224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:19:12.033232 kernel: io scheduler mq-deadline registered
Jan 17 00:19:12.033241 kernel: io scheduler kyber registered
Jan 17 00:19:12.033249 kernel: io scheduler bfq registered
Jan 17 00:19:12.035369 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:19:12.035396 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 00:19:12.035417 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:19:12.035436 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:19:12.035453 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:19:12.035461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:19:12.035470 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:19:12.035478 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:19:12.035486 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:19:12.035495 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:19:12.035666 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:19:12.035765 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:19:12.035863 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:19:11 UTC (1768609151)
Jan 17 00:19:12.035954 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 00:19:12.035965 kernel: intel_pstate: CPU model not supported
Jan 17 00:19:12.035973 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:19:12.035982 kernel: Segment Routing with IPv6
Jan 17 00:19:12.035990 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:19:12.035998 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:19:12.036006 kernel: Key type dns_resolver registered
Jan 17 00:19:12.036015 kernel: IPI shorthand broadcast: enabled
Jan 17 00:19:12.036026 kernel: sched_clock: Marking stable (1246006021, 230836796)->(1544481105, -67638288)
Jan 17 00:19:12.036034 kernel: registered taskstats version 1
Jan 17 00:19:12.036042 kernel: Loading compiled-in X.509 certificates
Jan 17 00:19:12.036051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:19:12.036059 kernel: Key type .fscrypt registered
Jan 17 00:19:12.036068 kernel: Key type fscrypt-provisioning registered
Jan 17 00:19:12.036076 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:19:12.036084 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:19:12.036092 kernel: ima: No architecture policies found
Jan 17 00:19:12.036103 kernel: clk: Disabling unused clocks
Jan 17 00:19:12.036111 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:19:12.036119 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:19:12.036128 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:19:12.036153 kernel: Run /init as init process
Jan 17 00:19:12.036164 kernel: with arguments:
Jan 17 00:19:12.036173 kernel: /init
Jan 17 00:19:12.036181 kernel: with environment:
Jan 17 00:19:12.036189 kernel: HOME=/
Jan 17 00:19:12.036200 kernel: TERM=linux
Jan 17 00:19:12.036211 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:19:12.036223 systemd[1]: Detected virtualization kvm.
Jan 17 00:19:12.036233 systemd[1]: Detected architecture x86-64.
Jan 17 00:19:12.036241 systemd[1]: Running in initrd.
Jan 17 00:19:12.036301 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:19:12.036311 systemd[1]: Hostname set to .
Jan 17 00:19:12.036323 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:19:12.036332 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:19:12.036341 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:12.036350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:19:12.036360 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:19:12.036369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:19:12.036378 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:19:12.036387 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:19:12.036401 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:19:12.036410 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:19:12.036420 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:12.036429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:19:12.036438 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:19:12.036446 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:19:12.036456 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:19:12.036467 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:19:12.036476 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:19:12.036485 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:19:12.036494 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:19:12.036503 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:19:12.036512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:12.036524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:12.036533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:12.036542 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:19:12.036552 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:19:12.036561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:19:12.036569 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:19:12.036578 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:19:12.036587 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:19:12.036599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:19:12.036652 systemd-journald[185]: Collecting audit messages is disabled.
Jan 17 00:19:12.036695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:12.036711 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:19:12.036728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:12.036737 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:19:12.036748 systemd-journald[185]: Journal started
Jan 17 00:19:12.036771 systemd-journald[185]: Runtime Journal (/run/log/journal/6709f8c738c74f3386d1544774dbc90b) is 4.9M, max 39.3M, 34.4M free.
Jan 17 00:19:12.038068 systemd-modules-load[186]: Inserted module 'overlay'
Jan 17 00:19:12.132709 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:19:12.132757 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:19:12.132778 kernel: Bridge firewalling registered
Jan 17 00:19:12.073295 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 17 00:19:12.135345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:12.137632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:12.144585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:12.160654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:19:12.166704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:19:12.173024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:19:12.179823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:12.183240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:19:12.193576 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:19:12.195754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:12.207555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:19:12.210768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:12.221627 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:19:12.223741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:12.226949 dracut-cmdline[216]: dracut-dracut-053
Jan 17 00:19:12.229919 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.273987 systemd-resolved[223]: Positive Trust Anchors:
Jan 17 00:19:12.274004 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:19:12.274040 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:19:12.277512 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 17 00:19:12.278954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:19:12.281474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:19:12.338333 kernel: SCSI subsystem initialized
Jan 17 00:19:12.350299 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:19:12.364310 kernel: iscsi: registered transport (tcp)
Jan 17 00:19:12.389613 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:19:12.389691 kernel: QLogic iSCSI HBA Driver
Jan 17 00:19:12.442816 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:19:12.452536 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:19:12.483398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:19:12.483478 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:19:12.487015 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:19:12.534317 kernel: raid6: avx2x4 gen() 30376 MB/s
Jan 17 00:19:12.552302 kernel: raid6: avx2x2 gen() 26141 MB/s
Jan 17 00:19:12.572576 kernel: raid6: avx2x1 gen() 21457 MB/s
Jan 17 00:19:12.572694 kernel: raid6: using algorithm avx2x4 gen() 30376 MB/s
Jan 17 00:19:12.592318 kernel: raid6: .... xor() 9195 MB/s, rmw enabled
Jan 17 00:19:12.592430 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:19:12.618323 kernel: xor: automatically using best checksumming function avx
Jan 17 00:19:12.789727 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:19:12.804591 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:19:12.813573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:12.829396 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 17 00:19:12.834445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:12.847183 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:19:12.868936 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 17 00:19:12.912077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:19:12.920580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:19:12.985912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:12.992616 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:19:13.025902 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:19:13.029235 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:19:13.031620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:13.032630 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:19:13.040754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:19:13.067962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:19:13.079974 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 00:19:13.084306 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:19:13.089948 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 00:19:13.107293 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:19:13.127724 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:19:13.127798 kernel: GPT:9289727 != 125829119
Jan 17 00:19:13.127810 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:19:13.127821 kernel: GPT:9289727 != 125829119
Jan 17 00:19:13.127831 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:19:13.127843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.141520 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:19:13.141584 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:19:13.155291 kernel: libata version 3.00 loaded.
Jan 17 00:19:13.161334 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 00:19:13.167284 kernel: scsi host1: ata_piix
Jan 17 00:19:13.172320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:19:13.195285 kernel: scsi host2: ata_piix
Jan 17 00:19:13.195517 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 00:19:13.195532 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 00:19:13.195543 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 00:19:13.195671 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 17 00:19:13.172443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:13.194756 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:13.210674 kernel: ACPI: bus type USB registered
Jan 17 00:19:13.210708 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:19:13.210720 kernel: usbcore: registered new interface driver hub
Jan 17 00:19:13.210743 kernel: usbcore: registered new device driver usb
Jan 17 00:19:13.196740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:13.196930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:13.200819 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:13.212718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:13.313497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:13.319563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:13.361380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:13.393933 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456)
Jan 17 00:19:13.394016 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461)
Jan 17 00:19:13.407549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:19:13.418939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:19:13.431893 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 00:19:13.432206 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 00:19:13.432427 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 00:19:13.432617 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 00:19:13.432801 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:19:13.433033 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 00:19:13.440727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:19:13.441800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:19:13.447631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:19:13.454537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:19:13.462642 disk-uuid[553]: Primary Header is updated.
Jan 17 00:19:13.462642 disk-uuid[553]: Secondary Entries is updated.
Jan 17 00:19:13.462642 disk-uuid[553]: Secondary Header is updated.
Jan 17 00:19:13.469404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.473339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:14.479626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:14.479715 disk-uuid[554]: The operation has completed successfully.
Jan 17 00:19:14.530133 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:19:14.531395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:19:14.543549 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:19:14.556693 sh[565]: Success
Jan 17 00:19:14.575449 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:19:14.636862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:19:14.645924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:19:14.649761 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:19:14.683977 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:19:14.684127 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:14.684149 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:19:14.687534 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:19:14.689790 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:19:14.699530 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:19:14.701041 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:19:14.706642 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:19:14.708499 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:19:14.728324 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:14.728408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:14.730536 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:19:14.737387 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:19:14.754018 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:19:14.757130 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:14.767311 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:19:14.777621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:19:14.874401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:19:14.885548 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:19:14.914752 systemd-networkd[748]: lo: Link UP
Jan 17 00:19:14.914763 systemd-networkd[748]: lo: Gained carrier
Jan 17 00:19:14.919068 systemd-networkd[748]: Enumeration completed
Jan 17 00:19:14.920421 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:19:14.922005 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:19:14.922009 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 00:19:14.923324 systemd[1]: Reached target network.target - Network.
Jan 17 00:19:14.926377 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:14.926382 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:14.927203 systemd-networkd[748]: eth0: Link UP
Jan 17 00:19:14.927207 systemd-networkd[748]: eth0: Gained carrier
Jan 17 00:19:14.927219 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:19:14.935566 systemd-networkd[748]: eth1: Link UP
Jan 17 00:19:14.935573 systemd-networkd[748]: eth1: Gained carrier
Jan 17 00:19:14.935590 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:14.943514 ignition[666]: Ignition 2.19.0
Jan 17 00:19:14.943523 ignition[666]: Stage: fetch-offline
Jan 17 00:19:14.947154 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:19:14.943566 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:14.943576 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:14.943738 ignition[666]: parsed url from cmdline: ""
Jan 17 00:19:14.943744 ignition[666]: no config URL provided
Jan 17 00:19:14.943752 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:14.943765 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:14.943774 ignition[666]: failed to fetch config: resource requires networking
Jan 17 00:19:14.944199 ignition[666]: Ignition finished successfully
Jan 17 00:19:14.954448 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.46/20 acquired from 169.254.169.253
Jan 17 00:19:14.955578 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:19:14.958378 systemd-networkd[748]: eth0: DHCPv4 address 165.232.147.124/20, gateway 165.232.144.1 acquired from 169.254.169.253
Jan 17 00:19:14.985491 ignition[756]: Ignition 2.19.0
Jan 17 00:19:14.985506 ignition[756]: Stage: fetch
Jan 17 00:19:14.985723 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:14.985741 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:14.985867 ignition[756]: parsed url from cmdline: ""
Jan 17 00:19:14.985872 ignition[756]: no config URL provided
Jan 17 00:19:14.985878 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:14.985890 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:14.985911 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 00:19:15.001472 ignition[756]: GET result: OK
Jan 17 00:19:15.001739 ignition[756]: parsing config with SHA512: c01edd69538ab8e5cdee492388bfaa76205bd00ccb7d24ac3eba9852a2e8f0fe83f6fa8d39c9e157ab2fd51e37aed09a9849b0db504317e787511944857fbae6
Jan 17 00:19:15.008523 unknown[756]: fetched base config from "system"
Jan 17 00:19:15.008532 unknown[756]: fetched base config from "system"
Jan 17 00:19:15.008540 unknown[756]: fetched user config from "digitalocean"
Jan 17 00:19:15.012776 ignition[756]: fetch: fetch complete
Jan 17 00:19:15.012798 ignition[756]: fetch: fetch passed
Jan 17 00:19:15.012942 ignition[756]: Ignition finished successfully
Jan 17 00:19:15.014995 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:19:15.028610 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:19:15.056245 ignition[763]: Ignition 2.19.0
Jan 17 00:19:15.057644 ignition[763]: Stage: kargs
Jan 17 00:19:15.057984 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.058003 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.059446 ignition[763]: kargs: kargs passed
Jan 17 00:19:15.061689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:19:15.059516 ignition[763]: Ignition finished successfully
Jan 17 00:19:15.069712 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:19:15.096712 ignition[770]: Ignition 2.19.0
Jan 17 00:19:15.096726 ignition[770]: Stage: disks
Jan 17 00:19:15.099785 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:19:15.096908 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.096920 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.101762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:19:15.098092 ignition[770]: disks: disks passed
Jan 17 00:19:15.110730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:19:15.098171 ignition[770]: Ignition finished successfully
Jan 17 00:19:15.112420 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:19:15.114350 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:19:15.115943 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:19:15.123664 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:19:15.142115 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:19:15.147083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:19:15.153541 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:19:15.270319 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:19:15.270094 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:19:15.271695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:19:15.286492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:19:15.289547 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:19:15.295577 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 00:19:15.299150 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:19:15.303441 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:19:15.303508 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:19:15.314297 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786)
Jan 17 00:19:15.321330 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.321419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:15.321456 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:19:15.321123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:19:15.345785 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:19:15.350415 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:19:15.351394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:19:15.422632 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:19:15.425439 coreos-metadata[788]: Jan 17 00:19:15.425 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:19:15.434393 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:19:15.436406 coreos-metadata[789]: Jan 17 00:19:15.436 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:19:15.441096 coreos-metadata[788]: Jan 17 00:19:15.440 INFO Fetch successful
Jan 17 00:19:15.443386 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:19:15.450536 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 00:19:15.450700 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 00:19:15.458416 coreos-metadata[789]: Jan 17 00:19:15.453 INFO Fetch successful
Jan 17 00:19:15.459676 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:19:15.463335 coreos-metadata[789]: Jan 17 00:19:15.462 INFO wrote hostname ci-4081.3.6-n-8d0945b27f to /sysroot/etc/hostname
Jan 17 00:19:15.464702 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:19:15.573480 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:19:15.585731 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:19:15.591554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:19:15.600315 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.637337 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:19:15.646839 ignition[909]: INFO : Ignition 2.19.0
Jan 17 00:19:15.649433 ignition[909]: INFO : Stage: mount
Jan 17 00:19:15.649433 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.649433 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.651967 ignition[909]: INFO : mount: mount passed
Jan 17 00:19:15.651967 ignition[909]: INFO : Ignition finished successfully
Jan 17 00:19:15.653776 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:19:15.660527 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:19:15.679703 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:19:15.699652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:19:15.711952 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (920)
Jan 17 00:19:15.712031 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.715386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:15.719593 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:19:15.724300 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:19:15.727213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:19:15.761902 ignition[937]: INFO : Ignition 2.19.0
Jan 17 00:19:15.761902 ignition[937]: INFO : Stage: files
Jan 17 00:19:15.763702 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.763702 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.765796 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:19:15.765796 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:19:15.765796 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:19:15.769470 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:19:15.770780 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:19:15.771978 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:19:15.771851 unknown[937]: wrote ssh authorized keys file for user: core
Jan 17 00:19:15.774363 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:19:15.774363 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:19:15.810385 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:19:15.861718 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:19:15.861718 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:19:15.864611 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:19:16.255078 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:19:16.311767 systemd-networkd[748]: eth0: Gained IPv6LL
Jan 17 00:19:16.649705 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:19:16.649705 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:19:16.652785 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17
00:19:16.652785 ignition[937]: INFO : files: files passed Jan 17 00:19:16.652785 ignition[937]: INFO : Ignition finished successfully Jan 17 00:19:16.653008 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:19:16.663595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:19:16.667850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:19:16.674385 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:19:16.675336 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:19:16.694043 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:16.694043 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:16.696666 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:16.697895 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:16.699372 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:19:16.715118 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:19:16.753934 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:19:16.754100 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:19:16.756206 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:19:16.757905 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:19:16.759678 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:19:16.765612 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:19:16.783844 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:16.792575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:19:16.819490 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:19:16.821508 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:19:16.823476 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:19:16.824203 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:19:16.824366 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:16.825950 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:19:16.826980 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:19:16.828823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:19:16.830645 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:19:16.832348 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:19:16.834011 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:19:16.835661 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:19:16.837730 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:19:16.839488 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 17 00:19:16.841585 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:19:16.843350 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:19:16.843587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:19:16.845215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:16.846243 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:19:16.847647 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:19:16.847915 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:19:16.849131 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:19:16.849389 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:19:16.851358 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:19:16.851508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:16.852439 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:19:16.852539 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:19:16.854022 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:19:16.854137 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:19:16.862565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:19:16.869574 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:19:16.874763 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:19:16.875035 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:19:16.877093 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:19:16.877389 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:19:16.888233 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:19:16.888636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:19:16.890553 systemd-networkd[748]: eth1: Gained IPv6LL Jan 17 00:19:16.904988 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:19:16.921004 ignition[989]: INFO : Ignition 2.19.0 Jan 17 00:19:16.921004 ignition[989]: INFO : Stage: umount Jan 17 00:19:16.921004 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:16.921004 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:19:16.941062 ignition[989]: INFO : umount: umount passed Jan 17 00:19:16.941062 ignition[989]: INFO : Ignition finished successfully Jan 17 00:19:16.925081 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:19:16.925326 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:19:16.938599 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:19:16.938721 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:19:16.942646 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:19:16.942759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:19:16.944361 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:19:16.944461 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:19:16.959718 systemd[1]: Stopped target network.target - Network. 
Jan 17 00:19:16.969571 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:19:16.969709 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:19:16.970805 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:19:16.987494 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:19:16.991490 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:19:16.992761 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:19:16.993689 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:19:16.996843 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:19:16.996934 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:19:16.997957 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:19:16.998024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:19:16.999023 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:19:16.999115 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:19:17.000607 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:19:17.000684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:19:17.002875 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:19:17.004962 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:19:17.007435 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:19:17.007614 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:19:17.010969 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:19:17.011008 systemd-networkd[748]: eth0: DHCPv6 lease lost Jan 17 00:19:17.011180 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:19:17.012349 systemd-networkd[748]: eth1: DHCPv6 lease lost Jan 17 00:19:17.015238 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:19:17.015499 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:19:17.017846 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:19:17.018006 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:19:17.025978 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:19:17.026057 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:19:17.033701 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:19:17.035428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:19:17.035562 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:19:17.039508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:19:17.039595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:19:17.041368 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:19:17.041437 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:19:17.044938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:19:17.045039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:19:17.046777 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 00:19:17.062017 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:19:17.062311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:19:17.064656 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:19:17.064806 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:19:17.068057 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:19:17.068190 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:19:17.070330 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:19:17.070410 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:19:17.072092 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:19:17.072194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:19:17.074693 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:19:17.074789 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:19:17.076405 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:19:17.076497 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:19:17.084677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:19:17.085726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:19:17.085839 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:19:17.089518 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:19:17.089623 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:19:17.091815 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:19:17.091905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:19:17.093721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:17.093814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:17.108222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:19:17.108492 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:19:17.110836 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:19:17.125788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:19:17.140787 systemd[1]: Switching root. Jan 17 00:19:17.206216 systemd-journald[185]: Journal stopped Jan 17 00:19:18.555310 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:19:18.555402 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:19:18.555424 kernel: SELinux: policy capability open_perms=1 Jan 17 00:19:18.555437 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:19:18.555449 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:19:18.555460 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:19:18.555473 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:19:18.555484 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:19:18.555502 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:19:18.555514 kernel: audit: type=1403 audit(1768609157.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:19:18.555534 systemd[1]: Successfully loaded SELinux policy in 49.979ms. Jan 17 00:19:18.555557 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.589ms. Jan 17 00:19:18.555571 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:19:18.555590 systemd[1]: Detected virtualization kvm. Jan 17 00:19:18.555604 systemd[1]: Detected architecture x86-64. Jan 17 00:19:18.555616 systemd[1]: Detected first boot. Jan 17 00:19:18.555632 systemd[1]: Hostname set to . Jan 17 00:19:18.555646 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:19:18.555659 zram_generator::config[1031]: No configuration found. Jan 17 00:19:18.555677 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:19:18.555691 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:19:18.555703 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:19:18.555716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:19:18.555743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:19:18.555767 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:19:18.555787 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:19:18.555807 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:19:18.555821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:19:18.555833 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:19:18.555846 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:19:18.555858 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:19:18.555871 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:19:18.555884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:19:18.555900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:19:18.555913 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:19:18.555925 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:19:18.555941 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:19:18.555963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:19:18.555982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:19:18.556001 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:19:18.556025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:19:18.556045 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:19:18.556066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:19:18.556086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:19:18.556102 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:19:18.556115 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:19:18.556128 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:19:18.556141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:19:18.556158 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:19:18.556170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:19:18.556185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:19:18.556207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:19:18.556226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:19:18.556244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:19:18.556286 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:19:18.556307 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:19:18.556326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.556345 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:19:18.556358 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:19:18.556370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:19:18.556384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:19:18.556404 systemd[1]: Reached target machines.target - Containers. Jan 17 00:19:18.556416 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:19:18.556428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:18.556440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:19:18.556456 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:19:18.556468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:19:18.556480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:19:18.556492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:19:18.556504 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:19:18.556516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:18.556535 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:19:18.556552 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:19:18.556564 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:19:18.556580 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:19:18.556592 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:19:18.556606 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:19:18.556617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:19:18.556630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:19:18.556642 kernel: fuse: init (API version 7.39) Jan 17 00:19:18.556654 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:19:18.556667 kernel: loop: module loaded Jan 17 00:19:18.556679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:19:18.556695 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:19:18.556707 systemd[1]: Stopped verity-setup.service. Jan 17 00:19:18.556719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.556732 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:19:18.556745 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:19:18.556756 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:19:18.556768 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:19:18.556781 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:19:18.556796 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:19:18.556808 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:19:18.556820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:19:18.556835 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:19:18.556847 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:19:18.556875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:18.556887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:18.556898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:18.556910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:18.556923 kernel: ACPI: bus type drm_connector registered Jan 17 00:19:18.556935 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:19:18.556950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:19:18.557002 systemd-journald[1114]: Collecting audit messages is disabled. Jan 17 00:19:18.557034 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:19:18.557050 systemd-journald[1114]: Journal started Jan 17 00:19:18.557075 systemd-journald[1114]: Runtime Journal (/run/log/journal/6709f8c738c74f3386d1544774dbc90b) is 4.9M, max 39.3M, 34.4M free. 
Jan 17 00:19:18.059318 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:19:18.082490 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:19:18.083161 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:19:18.560323 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:19:18.564407 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:19:18.565026 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:18.565285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:18.566433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:19:18.567687 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:19:18.568834 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:19:18.583710 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:19:18.591481 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:19:18.600434 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:19:18.601545 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:19:18.601611 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:19:18.606892 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:19:18.614934 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:19:18.621719 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:19:18.623599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:18.633116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:19:18.638154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:19:18.639009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:18.644667 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:19:18.645552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:18.650559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:19:18.655539 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:19:18.661541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:19:18.667916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:19:18.670093 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:19:18.671714 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:19:18.674122 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:19:18.699373 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 17 00:19:18.709498 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:19:18.725877 systemd-journald[1114]: Time spent on flushing to /var/log/journal/6709f8c738c74f3386d1544774dbc90b is 57.660ms for 989 entries. Jan 17 00:19:18.725877 systemd-journald[1114]: System Journal (/var/log/journal/6709f8c738c74f3386d1544774dbc90b) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:19:18.847610 systemd-journald[1114]: Received client request to flush runtime journal. Jan 17 00:19:18.851212 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:19:18.763916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:19:18.765935 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:19:18.775755 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:19:18.833400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:19:18.842663 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:19:18.855361 kernel: loop1: detected capacity change from 0 to 8 Jan 17 00:19:18.856957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:19:18.868530 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Jan 17 00:19:18.868560 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Jan 17 00:19:18.874532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:19:18.876739 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:19:18.896086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:19:18.914315 kernel: loop2: detected capacity change from 0 to 219144 Jan 17 00:19:18.907598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:19:18.972307 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:19:18.990807 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:19:19.006650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:19:19.037423 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:19:19.075282 kernel: loop5: detected capacity change from 0 to 8 Jan 17 00:19:19.085961 kernel: loop6: detected capacity change from 0 to 219144 Jan 17 00:19:19.098791 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 17 00:19:19.099928 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 17 00:19:19.112302 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:19:19.118483 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:19:19.142531 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 00:19:19.144029 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 17 00:19:19.166122 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:19:19.166144 systemd[1]: Reloading... Jan 17 00:19:19.364491 zram_generator::config[1209]: No configuration found. Jan 17 00:19:19.467074 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 17 00:19:19.622773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:19.714219 systemd[1]: Reloading finished in 545 ms. Jan 17 00:19:19.739922 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:19:19.742158 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:19:19.755696 systemd[1]: Starting ensure-sysext.service... Jan 17 00:19:19.761596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:19:19.784372 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:19:19.784416 systemd[1]: Reloading... Jan 17 00:19:19.824541 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:19:19.825589 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:19:19.826807 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:19:19.827220 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 17 00:19:19.827401 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 17 00:19:19.830955 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:19:19.831154 systemd-tmpfiles[1250]: Skipping /boot Jan 17 00:19:19.849103 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:19:19.851512 systemd-tmpfiles[1250]: Skipping /boot Jan 17 00:19:19.929297 zram_generator::config[1279]: No configuration found. Jan 17 00:19:20.079728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:20.162710 systemd[1]: Reloading finished in 377 ms. Jan 17 00:19:20.183224 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:19:20.195427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:19:20.222942 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:20.227917 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:19:20.235432 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:19:20.247831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:19:20.253724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:19:20.265355 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:19:20.272104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.272416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.282787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:19:20.289758 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 00:19:20.297247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:20.299619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.299889 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.302635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:19:20.317853 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:19:20.328779 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:19:20.336191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.336541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.336818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.336955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.345789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.346130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.356777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:19:20.357926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.358102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.365383 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:19:20.369004 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:20.371439 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:20.376954 systemd[1]: Finished ensure-sysext.service. Jan 17 00:19:20.394219 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:19:20.396349 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:19:20.397164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:20.397434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:20.401671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:20.406002 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:19:20.416989 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 17 00:19:20.422174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:20.424096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 00:19:20.425831 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:19:20.426357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:19:20.428954 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:19:20.431839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:20.462469 augenrules[1359]: No rules Jan 17 00:19:20.464563 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:20.465652 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:19:20.479317 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:19:20.480947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:19:20.656065 systemd-resolved[1327]: Positive Trust Anchors: Jan 17 00:19:20.656092 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:19:20.656129 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:19:20.664009 systemd-resolved[1327]: Using system hostname 'ci-4081.3.6-n-8d0945b27f'. Jan 17 00:19:20.666136 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:19:20.667044 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:19:20.683879 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:19:20.699982 systemd-networkd[1368]: lo: Link UP Jan 17 00:19:20.700410 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:19:20.700440 systemd-networkd[1368]: lo: Gained carrier Jan 17 00:19:20.702060 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:19:20.703263 systemd-networkd[1368]: Enumeration completed Jan 17 00:19:20.703694 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:19:20.703784 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-d6:ff:96:de:35:82.network. Jan 17 00:19:20.704798 systemd[1]: Reached target network.target - Network. Jan 17 00:19:20.705639 systemd-networkd[1368]: eth0: Link UP Jan 17 00:19:20.705647 systemd-networkd[1368]: eth0: Gained carrier Jan 17 00:19:20.716656 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:19:20.752679 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 00:19:20.753520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.753743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.761396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 00:19:20.766402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:19:20.775501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:20.776665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.776714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:19:20.776734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.786618 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 00:19:20.789007 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 00:19:20.791120 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1385) Jan 17 00:19:20.819735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:20.819942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:20.828805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:19:20.839940 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:19:20.844152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:20.844395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:20.846523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:20.851990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:20.852077 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-8e:5b:ac:7a:59:0b.network. Jan 17 00:19:20.853473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:20.855169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:20.855859 systemd-networkd[1368]: eth1: Link UP Jan 17 00:19:20.855871 systemd-networkd[1368]: eth1: Gained carrier Jan 17 00:19:20.875819 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:19:20.901530 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:19:20.914349 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:19:20.938326 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 00:19:20.943062 systemd-timesyncd[1350]: Contacted time server 172.234.25.10:123 (0.flatcar.pool.ntp.org). Jan 17 00:19:20.943154 systemd-timesyncd[1350]: Initial clock synchronization to Sat 2026-01-17 00:19:21.048127 UTC. Jan 17 00:19:21.001935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:19:21.051847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 00:19:21.066318 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:19:21.096301 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 00:19:21.101475 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 00:19:21.120536 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:19:21.123824 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:19:21.123945 kernel: [drm] features: -context_init Jan 17 00:19:21.140126 kernel: [drm] number of scanouts: 1 Jan 17 00:19:21.139785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:21.141837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.144336 kernel: [drm] number of cap sets: 0 Jan 17 00:19:21.150369 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 00:19:21.151519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:21.161768 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:19:21.161862 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:19:21.180337 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:19:21.194679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:21.195479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.212651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:21.279304 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:19:21.288431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.304124 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:19:21.310664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:19:21.339748 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:21.370699 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:19:21.371932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:21.372060 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:19:21.372251 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:19:21.372379 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:19:21.372660 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:19:21.372802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:19:21.372872 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:19:21.372926 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:19:21.372953 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:19:21.373001 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:19:21.374871 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:19:21.379087 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:19:21.386668 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 17 00:19:21.390671 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:19:21.392062 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:19:21.392842 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:19:21.395976 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:19:21.399301 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:21.399640 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:21.411513 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:19:21.414696 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:21.420542 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:19:21.428590 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:19:21.448434 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:19:21.458903 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:19:21.461111 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:19:21.471923 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:19:21.485307 jq[1442]: false Jan 17 00:19:21.489859 coreos-metadata[1440]: Jan 17 00:19:21.482 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:19:21.486285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:19:21.500423 dbus-daemon[1441]: [system] SELinux support is enabled Jan 17 00:19:21.494105 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:19:21.506968 coreos-metadata[1440]: Jan 17 00:19:21.497 INFO Fetch successful Jan 17 00:19:21.505564 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:19:21.513551 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:19:21.516088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:19:21.518923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:19:21.526069 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:19:21.536453 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:19:21.537766 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 17 00:19:21.539704 extend-filesystems[1445]: Found loop4 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found loop5 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found loop6 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found loop7 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda1 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda2 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda3 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found usr Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda4 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda6 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda7 Jan 17 00:19:21.542190 extend-filesystems[1445]: Found vda9 Jan 17 00:19:21.614301 extend-filesystems[1445]: Checking size of /dev/vda9 Jan 17 00:19:21.547510 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:19:21.563714 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:19:21.564375 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:19:21.564810 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:19:21.565385 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:19:21.634549 jq[1459]: true Jan 17 00:19:21.578945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:19:21.579183 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:19:21.608260 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:19:21.608351 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:19:21.610783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:19:21.610863 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 00:19:21.610890 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:19:21.648789 tar[1466]: linux-amd64/LICENSE Jan 17 00:19:21.660933 tar[1466]: linux-amd64/helm Jan 17 00:19:21.665073 extend-filesystems[1445]: Resized partition /dev/vda9 Jan 17 00:19:21.677332 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:19:21.685312 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 00:19:21.698715 jq[1476]: true Jan 17 00:19:21.712107 update_engine[1458]: I20260117 00:19:21.708130 1458 main.cc:92] Flatcar Update Engine starting Jan 17 00:19:21.719749 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:19:21.729376 update_engine[1458]: I20260117 00:19:21.728636 1458 update_check_scheduler.cc:74] Next update check in 8m12s Jan 17 00:19:21.732521 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 00:19:21.743054 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:19:21.759878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1382) Jan 17 00:19:21.760982 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:19:21.763326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:19:21.863330 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 00:19:21.885757 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:19:21.885757 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 00:19:21.885757 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 00:19:21.889735 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Jan 17 00:19:21.889735 extend-filesystems[1445]: Found vdb Jan 17 00:19:21.891549 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:19:21.891850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:19:21.892982 systemd-logind[1452]: New seat seat0. Jan 17 00:19:21.919446 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:19:21.923001 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:19:21.930726 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:19:21.945410 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:21.945342 systemd-networkd[1368]: eth0: Gained IPv6LL Jan 17 00:19:21.947612 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:19:21.954873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:19:21.961429 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:19:21.983935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:21.996749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:19:22.010896 systemd[1]: Starting sshkeys.service... Jan 17 00:19:22.109907 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:19:22.115721 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:19:22.137815 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:19:22.215956 coreos-metadata[1519]: Jan 17 00:19:22.215 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:19:22.222971 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:19:22.241759 coreos-metadata[1519]: Jan 17 00:19:22.240 INFO Fetch successful Jan 17 00:19:22.265443 unknown[1519]: wrote ssh authorized keys file for user: core Jan 17 00:19:22.306484 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:22.309720 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:19:22.313856 systemd[1]: Finished sshkeys.service. 
Jan 17 00:19:22.327728 systemd-networkd[1368]: eth1: Gained IPv6LL Jan 17 00:19:22.373921 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:19:22.436055 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:19:22.452237 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:19:22.501139 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:19:22.501490 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:19:22.510893 containerd[1481]: time="2026-01-17T00:19:22.510682733Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:19:22.511882 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:19:22.591439 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:19:22.607016 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:19:22.627587 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:19:22.630367 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:19:22.650707 containerd[1481]: time="2026-01-17T00:19:22.649169309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.656613 containerd[1481]: time="2026-01-17T00:19:22.656547689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.656770 containerd[1481]: time="2026-01-17T00:19:22.656756045Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:19:22.656826 containerd[1481]: time="2026-01-17T00:19:22.656815551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.657516 containerd[1481]: time="2026-01-17T00:19:22.657485777Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:19:22.658895 containerd[1481]: time="2026-01-17T00:19:22.658861132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.661783578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.661822687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.662112949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.662138346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.662159904Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662348 containerd[1481]: time="2026-01-17T00:19:22.662176112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.662719 containerd[1481]: time="2026-01-17T00:19:22.662689534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.665256 containerd[1481]: time="2026-01-17T00:19:22.665227162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.666325 containerd[1481]: time="2026-01-17T00:19:22.666292331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.666608 containerd[1481]: time="2026-01-17T00:19:22.666396586Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:19:22.667047 containerd[1481]: time="2026-01-17T00:19:22.667020833Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:19:22.669788 containerd[1481]: time="2026-01-17T00:19:22.669464132Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:19:22.679426 containerd[1481]: time="2026-01-17T00:19:22.679369658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:19:22.679640 containerd[1481]: time="2026-01-17T00:19:22.679626517Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:19:22.679696 containerd[1481]: time="2026-01-17T00:19:22.679685918Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:19:22.679746 containerd[1481]: time="2026-01-17T00:19:22.679736845Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:19:22.679839 containerd[1481]: time="2026-01-17T00:19:22.679824537Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:19:22.680973 containerd[1481]: time="2026-01-17T00:19:22.680222721Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681466697Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681670245Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681690321Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681715294Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681733910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681750752Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681764275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681781072Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681796611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681810968Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681824753Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681841522Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681929545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682292 containerd[1481]: time="2026-01-17T00:19:22.681948234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.681963505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.681979981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.681994359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682008512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682037678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682062897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682077398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682092809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682105423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682119578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682133444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682149409Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682173590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682188392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.682606 containerd[1481]: time="2026-01-17T00:19:22.682200005Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.683328 containerd[1481]: time="2026-01-17T00:19:22.682256374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:19:22.683422 containerd[1481]: time="2026-01-17T00:19:22.683404783Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:19:22.683462 containerd[1481]: time="2026-01-17T00:19:22.683453304Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.683506 containerd[1481]: time="2026-01-17T00:19:22.683495919Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:19:22.686367 containerd[1481]: time="2026-01-17T00:19:22.684812006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.686367 containerd[1481]: time="2026-01-17T00:19:22.684863975Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:19:22.686367 containerd[1481]: time="2026-01-17T00:19:22.684897867Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:19:22.686367 containerd[1481]: time="2026-01-17T00:19:22.684915474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:22.686530 containerd[1481]: time="2026-01-17T00:19:22.685412272Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:19:22.686530 containerd[1481]: time="2026-01-17T00:19:22.685575141Z" level=info msg="Connect containerd service" Jan 17 00:19:22.686530 containerd[1481]: time="2026-01-17T00:19:22.685643402Z" level=info msg="using legacy CRI server" Jan 17 00:19:22.686530 containerd[1481]: time="2026-01-17T00:19:22.685652293Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:19:22.686530 containerd[1481]: time="2026-01-17T00:19:22.685780933Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:19:22.691650 containerd[1481]: time="2026-01-17T00:19:22.690869775Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:19:22.691650 
containerd[1481]: time="2026-01-17T00:19:22.691160364Z" level=info msg="Start subscribing containerd event" Jan 17 00:19:22.691650 containerd[1481]: time="2026-01-17T00:19:22.691324433Z" level=info msg="Start recovering state" Jan 17 00:19:22.693435 containerd[1481]: time="2026-01-17T00:19:22.693379597Z" level=info msg="Start event monitor" Jan 17 00:19:22.693541 containerd[1481]: time="2026-01-17T00:19:22.693471707Z" level=info msg="Start snapshots syncer" Jan 17 00:19:22.693541 containerd[1481]: time="2026-01-17T00:19:22.693494634Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:19:22.693541 containerd[1481]: time="2026-01-17T00:19:22.693510114Z" level=info msg="Start streaming server" Jan 17 00:19:22.695197 containerd[1481]: time="2026-01-17T00:19:22.693373815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:19:22.695197 containerd[1481]: time="2026-01-17T00:19:22.693811490Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:19:22.694718 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:19:22.698670 containerd[1481]: time="2026-01-17T00:19:22.697905075Z" level=info msg="containerd successfully booted in 0.188814s" Jan 17 00:19:23.108322 tar[1466]: linux-amd64/README.md Jan 17 00:19:23.143049 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:19:23.562013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:23.567051 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:19:23.572725 systemd[1]: Startup finished in 1.416s (kernel) + 5.619s (initrd) + 6.246s (userspace) = 13.282s. Jan 17 00:19:23.580934 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:24.313734 kubelet[1563]: E0117 00:19:24.313671 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:24.317239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:24.317463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:24.318157 systemd[1]: kubelet.service: Consumed 1.453s CPU time. Jan 17 00:19:25.552627 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:19:25.569860 systemd[1]: Started sshd@0-165.232.147.124:22-4.153.228.146:45914.service - OpenSSH per-connection server daemon (4.153.228.146:45914). Jan 17 00:19:25.998592 sshd[1575]: Accepted publickey for core from 4.153.228.146 port 45914 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:26.000933 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:26.013437 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:19:26.019871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:19:26.023578 systemd-logind[1452]: New session 1 of user core. Jan 17 00:19:26.052113 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:19:26.062070 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:19:26.075022 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:19:26.234399 systemd[1579]: Queued start job for default target default.target. Jan 17 00:19:26.242156 systemd[1579]: Created slice app.slice - User Application Slice. Jan 17 00:19:26.242450 systemd[1579]: Reached target paths.target - Paths. Jan 17 00:19:26.242573 systemd[1579]: Reached target timers.target - Timers. Jan 17 00:19:26.244669 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:19:26.269886 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:19:26.270055 systemd[1579]: Reached target sockets.target - Sockets. Jan 17 00:19:26.270072 systemd[1579]: Reached target basic.target - Basic System. Jan 17 00:19:26.270144 systemd[1579]: Reached target default.target - Main User Target. Jan 17 00:19:26.270182 systemd[1579]: Startup finished in 184ms. Jan 17 00:19:26.270429 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:19:26.284714 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:19:26.607763 systemd[1]: Started sshd@1-165.232.147.124:22-4.153.228.146:45926.service - OpenSSH per-connection server daemon (4.153.228.146:45926). Jan 17 00:19:27.050987 sshd[1590]: Accepted publickey for core from 4.153.228.146 port 45926 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:27.052505 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:27.058127 systemd-logind[1452]: New session 2 of user core. Jan 17 00:19:27.065571 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:19:27.389363 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:27.393658 systemd[1]: sshd@1-165.232.147.124:22-4.153.228.146:45926.service: Deactivated successfully. Jan 17 00:19:27.395590 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:19:27.397692 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:19:27.399042 systemd-logind[1452]: Removed session 2. Jan 17 00:19:27.479907 systemd[1]: Started sshd@2-165.232.147.124:22-4.153.228.146:45940.service - OpenSSH per-connection server daemon (4.153.228.146:45940). Jan 17 00:19:27.934947 sshd[1597]: Accepted publickey for core from 4.153.228.146 port 45940 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:27.937580 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:27.943235 systemd-logind[1452]: New session 3 of user core. Jan 17 00:19:27.949628 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:19:28.262013 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:28.265673 systemd[1]: sshd@2-165.232.147.124:22-4.153.228.146:45940.service: Deactivated successfully. Jan 17 00:19:28.267855 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:19:28.269822 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:19:28.271087 systemd-logind[1452]: Removed session 3. Jan 17 00:19:28.346022 systemd[1]: Started sshd@3-165.232.147.124:22-4.153.228.146:45948.service - OpenSSH per-connection server daemon (4.153.228.146:45948). 
Jan 17 00:19:28.774309 sshd[1604]: Accepted publickey for core from 4.153.228.146 port 45948 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:28.776183 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:28.782342 systemd-logind[1452]: New session 4 of user core. Jan 17 00:19:28.792614 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:19:29.090842 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:29.094381 systemd[1]: sshd@3-165.232.147.124:22-4.153.228.146:45948.service: Deactivated successfully. Jan 17 00:19:29.096597 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:19:29.099084 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:19:29.100333 systemd-logind[1452]: Removed session 4. Jan 17 00:19:29.166701 systemd[1]: Started sshd@4-165.232.147.124:22-4.153.228.146:45964.service - OpenSSH per-connection server daemon (4.153.228.146:45964). Jan 17 00:19:29.559761 sshd[1611]: Accepted publickey for core from 4.153.228.146 port 45964 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:29.561618 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:29.567656 systemd-logind[1452]: New session 5 of user core. Jan 17 00:19:29.576746 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:19:29.806758 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:19:29.807143 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:29.820542 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:29.882301 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:29.886469 systemd[1]: sshd@4-165.232.147.124:22-4.153.228.146:45964.service: Deactivated successfully. Jan 17 00:19:29.888687 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:19:29.890527 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:19:29.892421 systemd-logind[1452]: Removed session 5. Jan 17 00:19:29.959710 systemd[1]: Started sshd@5-165.232.147.124:22-4.153.228.146:45972.service - OpenSSH per-connection server daemon (4.153.228.146:45972). Jan 17 00:19:30.397142 sshd[1619]: Accepted publickey for core from 4.153.228.146 port 45972 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:30.398846 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:30.404132 systemd-logind[1452]: New session 6 of user core. Jan 17 00:19:30.410668 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:19:30.645051 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:19:30.645483 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:30.650545 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:30.658218 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:19:30.658559 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:30.682239 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 17 00:19:30.684030 auditctl[1626]: No rules Jan 17 00:19:30.684448 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:19:30.684749 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:30.687757 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:30.736351 augenrules[1644]: No rules Jan 17 00:19:30.737126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:30.738677 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:30.806576 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:30.810397 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:19:30.810565 systemd[1]: sshd@5-165.232.147.124:22-4.153.228.146:45972.service: Deactivated successfully. Jan 17 00:19:30.812361 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:19:30.814121 systemd-logind[1452]: Removed session 6. Jan 17 00:19:30.888651 systemd[1]: Started sshd@6-165.232.147.124:22-4.153.228.146:45976.service - OpenSSH per-connection server daemon (4.153.228.146:45976). Jan 17 00:19:31.316112 sshd[1652]: Accepted publickey for core from 4.153.228.146 port 45976 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:31.318198 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:31.323471 systemd-logind[1452]: New session 7 of user core. Jan 17 00:19:31.331768 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:19:31.564524 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:19:31.564980 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:32.029718 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:19:32.029821 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:19:32.509520 dockerd[1670]: time="2026-01-17T00:19:32.508493221Z" level=info msg="Starting up" Jan 17 00:19:32.676104 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport973226005-merged.mount: Deactivated successfully. Jan 17 00:19:32.687220 systemd[1]: var-lib-docker-metacopy\x2dcheck1320321958-merged.mount: Deactivated successfully. Jan 17 00:19:32.711301 dockerd[1670]: time="2026-01-17T00:19:32.711095740Z" level=info msg="Loading containers: start." Jan 17 00:19:32.865311 kernel: Initializing XFRM netlink socket Jan 17 00:19:32.964067 systemd-networkd[1368]: docker0: Link UP Jan 17 00:19:32.986963 dockerd[1670]: time="2026-01-17T00:19:32.986892203Z" level=info msg="Loading containers: done." 
Jan 17 00:19:33.007081 dockerd[1670]: time="2026-01-17T00:19:33.007014870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:19:33.007328 dockerd[1670]: time="2026-01-17T00:19:33.007156044Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:19:33.007328 dockerd[1670]: time="2026-01-17T00:19:33.007296252Z" level=info msg="Daemon has completed initialization" Jan 17 00:19:33.051958 dockerd[1670]: time="2026-01-17T00:19:33.051870952Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:19:33.052187 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:19:33.952994 containerd[1481]: time="2026-01-17T00:19:33.952669355Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:19:34.568096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:19:34.579615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:34.776187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:34.788134 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:34.872228 kubelet[1825]: E0117 00:19:34.871693 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:34.877898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:34.878093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:34.887368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748155148.mount: Deactivated successfully. 
Jan 17 00:19:36.281144 containerd[1481]: time="2026-01-17T00:19:36.281056611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:36.284084 containerd[1481]: time="2026-01-17T00:19:36.283688822Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 17 00:19:36.285368 containerd[1481]: time="2026-01-17T00:19:36.285232911Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:36.291936 containerd[1481]: time="2026-01-17T00:19:36.291124980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:36.293673 containerd[1481]: time="2026-01-17T00:19:36.293608228Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.340882942s" Jan 17 00:19:36.293949 containerd[1481]: time="2026-01-17T00:19:36.293923222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:19:36.294944 containerd[1481]: time="2026-01-17T00:19:36.294903400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:19:37.995289 containerd[1481]: time="2026-01-17T00:19:37.994082414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:37.996571 containerd[1481]: time="2026-01-17T00:19:37.996513312Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 17 00:19:37.997368 containerd[1481]: time="2026-01-17T00:19:37.997339162Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:38.001651 containerd[1481]: time="2026-01-17T00:19:38.001596012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:38.003699 containerd[1481]: time="2026-01-17T00:19:38.003622855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.708485004s" Jan 17 00:19:38.003846 containerd[1481]: time="2026-01-17T00:19:38.003722926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:19:38.005693 
containerd[1481]: time="2026-01-17T00:19:38.005353055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:19:39.466455 containerd[1481]: time="2026-01-17T00:19:39.466374989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.467931 containerd[1481]: time="2026-01-17T00:19:39.467867375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 17 00:19:39.469485 containerd[1481]: time="2026-01-17T00:19:39.468647315Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.472958 containerd[1481]: time="2026-01-17T00:19:39.472902750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.474907 containerd[1481]: time="2026-01-17T00:19:39.474853655Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.469457365s" Jan 17 00:19:39.474907 containerd[1481]: time="2026-01-17T00:19:39.474907080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:19:39.475511 containerd[1481]: time="2026-01-17T00:19:39.475473370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:19:40.783062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842513581.mount: Deactivated successfully. 
Jan 17 00:19:41.263656 containerd[1481]: time="2026-01-17T00:19:41.263492582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.264793 containerd[1481]: time="2026-01-17T00:19:41.264543726Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 17 00:19:41.265721 containerd[1481]: time="2026-01-17T00:19:41.265413608Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.268072 containerd[1481]: time="2026-01-17T00:19:41.268013063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.269343 containerd[1481]: time="2026-01-17T00:19:41.269289801Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.793752851s" Jan 17 00:19:41.269715 containerd[1481]: time="2026-01-17T00:19:41.269509873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:19:41.270169 containerd[1481]: time="2026-01-17T00:19:41.270147025Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:19:41.272064 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 17 00:19:42.039499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831892277.mount: Deactivated successfully. 
Jan 17 00:19:43.497948 containerd[1481]: time="2026-01-17T00:19:43.497895109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.500347 containerd[1481]: time="2026-01-17T00:19:43.500219745Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 17 00:19:43.501357 containerd[1481]: time="2026-01-17T00:19:43.501281638Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.507285 containerd[1481]: time="2026-01-17T00:19:43.505016543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.507285 containerd[1481]: time="2026-01-17T00:19:43.507037983Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.23677578s" Jan 17 00:19:43.507285 containerd[1481]: time="2026-01-17T00:19:43.507094960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:19:43.508346 containerd[1481]: time="2026-01-17T00:19:43.508304237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:19:44.257671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911444363.mount: Deactivated successfully. 
Jan 17 00:19:44.266479 containerd[1481]: time="2026-01-17T00:19:44.266397398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.267863 containerd[1481]: time="2026-01-17T00:19:44.267748633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 17 00:19:44.270207 containerd[1481]: time="2026-01-17T00:19:44.268498743Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.271141 containerd[1481]: time="2026-01-17T00:19:44.271091131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.273210 containerd[1481]: time="2026-01-17T00:19:44.273134544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 764.786768ms" Jan 17 00:19:44.273210 containerd[1481]: time="2026-01-17T00:19:44.273200138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:19:44.274026 containerd[1481]: time="2026-01-17T00:19:44.273986041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:19:44.343708 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 00:19:44.891176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:19:44.900738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:44.935644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039009785.mount: Deactivated successfully. Jan 17 00:19:45.172798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:45.192897 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:45.334826 kubelet[1982]: E0117 00:19:45.334759 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:45.337921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:45.338125 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:19:47.605037 containerd[1481]: time="2026-01-17T00:19:47.604969867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:47.606424 containerd[1481]: time="2026-01-17T00:19:47.606085141Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 17 00:19:47.607284 containerd[1481]: time="2026-01-17T00:19:47.606900026Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:47.611167 containerd[1481]: time="2026-01-17T00:19:47.611089915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:47.612191 containerd[1481]: time="2026-01-17T00:19:47.612143025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.338121438s" Jan 17 00:19:47.612319 containerd[1481]: time="2026-01-17T00:19:47.612194917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:19:51.912315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:51.925630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:51.993871 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Jan 17 00:19:51.994092 systemd[1]: Reloading... Jan 17 00:19:52.124374 zram_generator::config[2095]: No configuration found. Jan 17 00:19:52.314040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:52.397087 systemd[1]: Reloading finished in 402 ms. Jan 17 00:19:52.450390 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:19:52.450479 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:19:52.450747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:52.456827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:52.604622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:52.622771 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:52.674088 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:19:52.674716 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:19:52.675234 kubelet[2151]: I0117 00:19:52.675196 2151 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:19:53.416125 kubelet[2151]: I0117 00:19:53.416073 2151 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:19:53.416413 kubelet[2151]: I0117 00:19:53.416400 2151 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:19:53.417172 kubelet[2151]: I0117 00:19:53.417145 2151 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:19:53.417295 kubelet[2151]: I0117 00:19:53.417280 2151 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:19:53.417706 kubelet[2151]: I0117 00:19:53.417688 2151 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:19:53.435779 kubelet[2151]: I0117 00:19:53.434747 2151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:19:53.438110 kubelet[2151]: E0117 00:19:53.438074 2151 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://165.232.147.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:19:53.443859 kubelet[2151]: E0117 00:19:53.443780 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:19:53.444006 kubelet[2151]: I0117 00:19:53.443924 2151 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:19:53.453868 kubelet[2151]: I0117 00:19:53.453814 2151 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:19:53.455037 kubelet[2151]: I0117 00:19:53.454941 2151 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:19:53.456928 kubelet[2151]: I0117 00:19:53.455022 2151 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8d0945b27f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:19:53.457096 kubelet[2151]: I0117 00:19:53.456950 2151 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:19:53.457096 kubelet[2151]: I0117 00:19:53.456971 2151 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:19:53.457145 kubelet[2151]: I0117 00:19:53.457137 2151 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:19:53.461711 kubelet[2151]: I0117 00:19:53.461656 2151 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:53.463350 kubelet[2151]: I0117 00:19:53.463307 2151 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:19:53.463819 kubelet[2151]: I0117 00:19:53.463786 2151 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:19:53.463870 kubelet[2151]: I0117 00:19:53.463865 2151 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:19:53.463910 kubelet[2151]: I0117 00:19:53.463879 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:19:53.465206 kubelet[2151]: E0117 00:19:53.465171 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://165.232.147.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8d0945b27f&limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:53.467523 kubelet[2151]: E0117 00:19:53.467183 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://165.232.147.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:53.467790 kubelet[2151]: I0117 00:19:53.467774 2151 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:19:53.469979 kubelet[2151]: I0117 00:19:53.469954 2151 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:19:53.470279 kubelet[2151]: I0117 00:19:53.470091 2151 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:19:53.470279 kubelet[2151]: W0117 00:19:53.470171 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:19:53.473659 kubelet[2151]: I0117 00:19:53.473590 2151 server.go:1262] "Started kubelet" Jan 17 00:19:53.481995 kubelet[2151]: I0117 00:19:53.480598 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:19:53.487226 kubelet[2151]: I0117 00:19:53.486751 2151 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:19:53.487226 kubelet[2151]: I0117 00:19:53.486827 2151 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:19:53.487226 kubelet[2151]: I0117 00:19:53.487101 2151 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:19:53.488578 kubelet[2151]: I0117 00:19:53.488479 2151 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:19:53.490832 kubelet[2151]: I0117 00:19:53.490703 2151 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:19:53.495867 kubelet[2151]: I0117 00:19:53.494574 2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:19:53.495867 kubelet[2151]: E0117 00:19:53.495177 2151 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" Jan 17 00:19:53.495867 kubelet[2151]: I0117 00:19:53.495248 2151 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:19:53.496574 kubelet[2151]: I0117 00:19:53.496545 2151 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:19:53.496757 kubelet[2151]: I0117 00:19:53.496747 2151 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:19:53.498070 kubelet[2151]: E0117 00:19:53.498046 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://165.232.147.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:53.498382 kubelet[2151]: E0117 00:19:53.498342 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.147.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8d0945b27f?timeout=10s\": dial tcp 165.232.147.124:6443: connect: 
connection refused" interval="200ms" Jan 17 00:19:53.500747 kubelet[2151]: E0117 00:19:53.498796 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.147.124:6443/api/v1/namespaces/default/events\": dial tcp 165.232.147.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8d0945b27f.188b5cb0e37b368b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8d0945b27f,UID:ci-4081.3.6-n-8d0945b27f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8d0945b27f,},FirstTimestamp:2026-01-17 00:19:53.473554059 +0000 UTC m=+0.845535449,LastTimestamp:2026-01-17 00:19:53.473554059 +0000 UTC m=+0.845535449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8d0945b27f,}" Jan 17 00:19:53.502811 kubelet[2151]: E0117 00:19:53.502790 2151 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:19:53.503118 kubelet[2151]: I0117 00:19:53.503104 2151 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:19:53.503178 kubelet[2151]: I0117 00:19:53.503172 2151 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:19:53.504293 kubelet[2151]: I0117 00:19:53.503325 2151 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:19:53.519354 kubelet[2151]: I0117 00:19:53.519294 2151 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:19:53.521032 kubelet[2151]: I0117 00:19:53.521001 2151 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:19:53.521187 kubelet[2151]: I0117 00:19:53.521176 2151 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:19:53.521347 kubelet[2151]: I0117 00:19:53.521335 2151 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:19:53.521481 kubelet[2151]: E0117 00:19:53.521462 2151 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:53.531179 kubelet[2151]: E0117 00:19:53.531145 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://165.232.147.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:53.533279 kubelet[2151]: I0117 00:19:53.533215 2151 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:53.533279 kubelet[2151]: I0117 00:19:53.533234 2151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:53.533279 kubelet[2151]: I0117 00:19:53.533274 2151 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:53.536696 kubelet[2151]: I0117 00:19:53.536637 2151 policy_none.go:49] "None policy: Start" Jan 17 00:19:53.536696 kubelet[2151]: I0117 00:19:53.536692 2151 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:19:53.536873 kubelet[2151]: I0117 00:19:53.536713 2151 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:19:53.539132 kubelet[2151]: I0117 00:19:53.539093 2151 policy_none.go:47] "Start" Jan 17 00:19:53.546534 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:19:53.557301 kubelet[2151]: E0117 00:19:53.557138 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.147.124:6443/api/v1/namespaces/default/events\": dial tcp 165.232.147.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8d0945b27f.188b5cb0e37b368b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8d0945b27f,UID:ci-4081.3.6-n-8d0945b27f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8d0945b27f,},FirstTimestamp:2026-01-17 00:19:53.473554059 +0000 UTC m=+0.845535449,LastTimestamp:2026-01-17 00:19:53.473554059 +0000 UTC m=+0.845535449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8d0945b27f,}" Jan 17 00:19:53.570809 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:19:53.575019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 00:19:53.580615 kubelet[2151]: E0117 00:19:53.580560 2151 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:53.580903 kubelet[2151]: I0117 00:19:53.580875 2151 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:53.580961 kubelet[2151]: I0117 00:19:53.580901 2151 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:53.581728 kubelet[2151]: I0117 00:19:53.581702 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:53.583755 kubelet[2151]: E0117 00:19:53.583555 2151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:19:53.583755 kubelet[2151]: E0117 00:19:53.583635 2151 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8d0945b27f\" not found" Jan 17 00:19:53.638488 systemd[1]: Created slice kubepods-burstable-podf4131dcfbec40939c356ae8e91311312.slice - libcontainer container kubepods-burstable-podf4131dcfbec40939c356ae8e91311312.slice. Jan 17 00:19:53.647491 kubelet[2151]: E0117 00:19:53.647453 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.652269 systemd[1]: Created slice kubepods-burstable-pod09ed2b5e98fbb02e7d27edc39fd3d7c4.slice - libcontainer container kubepods-burstable-pod09ed2b5e98fbb02e7d27edc39fd3d7c4.slice. Jan 17 00:19:53.664204 kubelet[2151]: E0117 00:19:53.664174 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.665729 systemd[1]: Created slice kubepods-burstable-poda82a9a470c8eb467796f5b995f6b7025.slice - libcontainer container kubepods-burstable-poda82a9a470c8eb467796f5b995f6b7025.slice. 
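
Each static pod then gets its own slice beneath the QoS tier, named kubepods-<qos>-pod<uid>.slice, with systemd unit-name escaping applied to the UID (dashes become underscores; compare kube-proxy's UID ed3d9866-91a3-42a2-8ad8-6e11059b8ce2 with its slice name later in this log). A short sketch that reproduces the names seen here; it is not kubelet's own implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice rebuilds the per-pod slice names from this log: the QoS tier is
    // folded into the unit name and dashes in the pod UID are escaped to
    // underscores per systemd unit-name rules. Guaranteed pods omit the tier.
    func podSlice(qos, uid string) string {
        esc := strings.ReplaceAll(uid, "-", "_")
        if qos == "Guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", esc)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qos), esc)
    }

    func main() {
        // Matches kubepods-burstable-podf4131dcf....slice above.
        fmt.Println(podSlice("Burstable", "f4131dcfbec40939c356ae8e91311312"))
        // Matches kube-proxy's besteffort slice later in the log.
        fmt.Println(podSlice("BestEffort", "ed3d9866-91a3-42a2-8ad8-6e11059b8ce2"))
    }
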
Jan 17 00:19:53.668736 kubelet[2151]: E0117 00:19:53.668619 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.682840 kubelet[2151]: I0117 00:19:53.682787 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.683329 kubelet[2151]: E0117 00:19:53.683222 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.147.124:6443/api/v1/nodes\": dial tcp 165.232.147.124:6443: connect: connection refused" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.697968 kubelet[2151]: I0117 00:19:53.697879 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.697968 kubelet[2151]: I0117 00:19:53.697946 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.697968 kubelet[2151]: I0117 00:19:53.697975 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.697968 kubelet[2151]: I0117 00:19:53.697990 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.697968 kubelet[2151]: I0117 00:19:53.698007 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.698292 kubelet[2151]: I0117 00:19:53.698022 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.698292 kubelet[2151]: I0117 00:19:53.698158 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.698292 kubelet[2151]: I0117 00:19:53.698175 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.698292 kubelet[2151]: I0117 00:19:53.698192 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a82a9a470c8eb467796f5b995f6b7025-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8d0945b27f\" (UID: \"a82a9a470c8eb467796f5b995f6b7025\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.699896 kubelet[2151]: E0117 00:19:53.699797 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.147.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8d0945b27f?timeout=10s\": dial tcp 165.232.147.124:6443: connect: connection refused" interval="400ms" Jan 17 00:19:53.884698 kubelet[2151]: I0117 00:19:53.884602 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.885101 kubelet[2151]: E0117 00:19:53.885063 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.147.124:6443/api/v1/nodes\": dial tcp 165.232.147.124:6443: connect: connection refused" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:53.952524 kubelet[2151]: E0117 00:19:53.951779 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:53.953010 containerd[1481]: time="2026-01-17T00:19:53.952960878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8d0945b27f,Uid:f4131dcfbec40939c356ae8e91311312,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:53.955839 systemd-resolved[1327]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 17 00:19:53.968684 kubelet[2151]: E0117 00:19:53.968301 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:53.975852 containerd[1481]: time="2026-01-17T00:19:53.975540865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8d0945b27f,Uid:09ed2b5e98fbb02e7d27edc39fd3d7c4,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:53.977247 kubelet[2151]: E0117 00:19:53.976246 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:53.979221 containerd[1481]: time="2026-01-17T00:19:53.978960746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8d0945b27f,Uid:a82a9a470c8eb467796f5b995f6b7025,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:54.100929 kubelet[2151]: E0117 00:19:54.100876 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.147.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8d0945b27f?timeout=10s\": dial tcp 165.232.147.124:6443: connect: connection refused" interval="800ms" Jan 17 00:19:54.287459 kubelet[2151]: I0117 00:19:54.287237 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:54.287459 kubelet[2151]: E0117 00:19:54.287676 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.147.124:6443/api/v1/nodes\": dial tcp 165.232.147.124:6443: connect: connection refused" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:54.502975 kubelet[2151]: E0117 00:19:54.502706 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://165.232.147.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8d0945b27f&limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:54.530473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035249991.mount: Deactivated successfully. 
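
Worth noticing across these retries: the lease controller's "Failed to ensure lease exists" interval doubles on every failure, 200ms, then 400ms, then the 800ms above, then 1.6s further down, while the apiserver at 165.232.147.124:6443 still refuses connections (its container has not started yet). A self-contained sketch of the same probe-and-double loop; illustrative, not client-go's actual backoff code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Probes the apiserver endpoint from the errors above and doubles the wait
    // after each failure, mirroring the 200ms -> 400ms -> 800ms -> 1.6s
    // progression of the lease controller's retry interval.
    func main() {
        const addr = "165.232.147.124:6443"
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
            time.Sleep(interval)
            interval *= 2 // the doubling visible in the log
        }
    }
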
Jan 17 00:19:54.589068 containerd[1481]: time="2026-01-17T00:19:54.588491660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:54.598336 containerd[1481]: time="2026-01-17T00:19:54.598155014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:19:54.598851 containerd[1481]: time="2026-01-17T00:19:54.598799815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:54.602444 containerd[1481]: time="2026-01-17T00:19:54.602206456Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:54.604305 containerd[1481]: time="2026-01-17T00:19:54.604160580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:54.605439 containerd[1481]: time="2026-01-17T00:19:54.605244635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:54.606407 containerd[1481]: time="2026-01-17T00:19:54.606351054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:54.608688 containerd[1481]: time="2026-01-17T00:19:54.608411830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:54.614297 containerd[1481]: time="2026-01-17T00:19:54.614165608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.064696ms" Jan 17 00:19:54.617007 containerd[1481]: time="2026-01-17T00:19:54.616913840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.867196ms" Jan 17 00:19:54.623391 containerd[1481]: time="2026-01-17T00:19:54.623100814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.449394ms" Jan 17 00:19:54.659001 kubelet[2151]: E0117 00:19:54.658718 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://165.232.147.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:54.756956 kubelet[2151]: E0117 00:19:54.756653 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://165.232.147.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:54.885879 kubelet[2151]: E0117 00:19:54.885610 2151 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://165.232.147.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:54.902030 kubelet[2151]: E0117 00:19:54.901945 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.147.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8d0945b27f?timeout=10s\": dial tcp 165.232.147.124:6443: connect: connection refused" interval="1.6s" Jan 17 00:19:54.925146 containerd[1481]: time="2026-01-17T00:19:54.924920282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:54.925588 containerd[1481]: time="2026-01-17T00:19:54.925095696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:54.925588 containerd[1481]: time="2026-01-17T00:19:54.925141719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.926909 containerd[1481]: time="2026-01-17T00:19:54.926704760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.927101 containerd[1481]: time="2026-01-17T00:19:54.926343575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:54.927101 containerd[1481]: time="2026-01-17T00:19:54.926475704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:54.928007 containerd[1481]: time="2026-01-17T00:19:54.927229291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.928471 containerd[1481]: time="2026-01-17T00:19:54.928354499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.928835 containerd[1481]: time="2026-01-17T00:19:54.928705053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:54.931430 containerd[1481]: time="2026-01-17T00:19:54.928804387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:54.931430 containerd[1481]: time="2026-01-17T00:19:54.930445438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.931430 containerd[1481]: time="2026-01-17T00:19:54.930628341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:54.973202 systemd[1]: Started cri-containerd-d096513274998c63b844a88396c2d4e584c0c83d77ffaeac370a27b08e1858ce.scope - libcontainer container d096513274998c63b844a88396c2d4e584c0c83d77ffaeac370a27b08e1858ce. Jan 17 00:19:54.986863 systemd[1]: Started cri-containerd-26cb38b2a9cbe8c737792681944883ad5f892f9de1af974bab59ab044d37451b.scope - libcontainer container 26cb38b2a9cbe8c737792681944883ad5f892f9de1af974bab59ab044d37451b. Jan 17 00:19:54.991196 systemd[1]: Started cri-containerd-472b78562f3eb4ca362cae8ca019ab37a8377aa9dee33955dcdd60af7ee3b78e.scope - libcontainer container 472b78562f3eb4ca362cae8ca019ab37a8377aa9dee33955dcdd60af7ee3b78e. Jan 17 00:19:55.091379 kubelet[2151]: I0117 00:19:55.090067 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:55.094040 kubelet[2151]: E0117 00:19:55.093830 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.147.124:6443/api/v1/nodes\": dial tcp 165.232.147.124:6443: connect: connection refused" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:55.095086 containerd[1481]: time="2026-01-17T00:19:55.095041230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8d0945b27f,Uid:f4131dcfbec40939c356ae8e91311312,Namespace:kube-system,Attempt:0,} returns sandbox id \"472b78562f3eb4ca362cae8ca019ab37a8377aa9dee33955dcdd60af7ee3b78e\"" Jan 17 00:19:55.103012 kubelet[2151]: E0117 00:19:55.102516 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:55.107203 containerd[1481]: time="2026-01-17T00:19:55.106699618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8d0945b27f,Uid:09ed2b5e98fbb02e7d27edc39fd3d7c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"26cb38b2a9cbe8c737792681944883ad5f892f9de1af974bab59ab044d37451b\"" Jan 17 00:19:55.107921 kubelet[2151]: E0117 00:19:55.107728 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:55.124559 containerd[1481]: time="2026-01-17T00:19:55.124507425Z" level=info msg="CreateContainer within sandbox \"472b78562f3eb4ca362cae8ca019ab37a8377aa9dee33955dcdd60af7ee3b78e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:19:55.129865 containerd[1481]: time="2026-01-17T00:19:55.129793340Z" level=info msg="CreateContainer within sandbox \"26cb38b2a9cbe8c737792681944883ad5f892f9de1af974bab59ab044d37451b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:19:55.135705 containerd[1481]: time="2026-01-17T00:19:55.135633548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8d0945b27f,Uid:a82a9a470c8eb467796f5b995f6b7025,Namespace:kube-system,Attempt:0,} returns sandbox id \"d096513274998c63b844a88396c2d4e584c0c83d77ffaeac370a27b08e1858ce\"" Jan 17 00:19:55.137859 kubelet[2151]: E0117 00:19:55.137662 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:55.147561 containerd[1481]: time="2026-01-17T00:19:55.147312930Z" level=info msg="CreateContainer within sandbox \"d096513274998c63b844a88396c2d4e584c0c83d77ffaeac370a27b08e1858ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:19:55.159313 containerd[1481]: time="2026-01-17T00:19:55.159163638Z" level=info msg="CreateContainer within sandbox \"26cb38b2a9cbe8c737792681944883ad5f892f9de1af974bab59ab044d37451b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b838f96d454ba05ed24cce4bc327ce6ee5fe7fde03661317623008252003d81\"" Jan 17 00:19:55.160057 containerd[1481]: time="2026-01-17T00:19:55.159898965Z" level=info msg="CreateContainer within sandbox \"472b78562f3eb4ca362cae8ca019ab37a8377aa9dee33955dcdd60af7ee3b78e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c25286e46c523e8344851e2f9ed27466247b340e97f176f07c947396530204c7\"" Jan 17 00:19:55.161086 containerd[1481]: time="2026-01-17T00:19:55.160938313Z" level=info msg="StartContainer for \"c25286e46c523e8344851e2f9ed27466247b340e97f176f07c947396530204c7\"" Jan 17 00:19:55.161086 containerd[1481]: time="2026-01-17T00:19:55.161058204Z" level=info msg="StartContainer for \"4b838f96d454ba05ed24cce4bc327ce6ee5fe7fde03661317623008252003d81\"" Jan 17 00:19:55.178293 containerd[1481]: time="2026-01-17T00:19:55.178191803Z" level=info msg="CreateContainer within sandbox \"d096513274998c63b844a88396c2d4e584c0c83d77ffaeac370a27b08e1858ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f2bb573b187febce6d7de99cb2810c95ac599878d2856ffd73f2718bab9efdb9\"" Jan 17 00:19:55.179821 containerd[1481]: time="2026-01-17T00:19:55.179674445Z" level=info msg="StartContainer for \"f2bb573b187febce6d7de99cb2810c95ac599878d2856ffd73f2718bab9efdb9\"" Jan 17 00:19:55.209746 systemd[1]: Started cri-containerd-c25286e46c523e8344851e2f9ed27466247b340e97f176f07c947396530204c7.scope - libcontainer container c25286e46c523e8344851e2f9ed27466247b340e97f176f07c947396530204c7. Jan 17 00:19:55.232535 systemd[1]: Started cri-containerd-4b838f96d454ba05ed24cce4bc327ce6ee5fe7fde03661317623008252003d81.scope - libcontainer container 4b838f96d454ba05ed24cce4bc327ce6ee5fe7fde03661317623008252003d81. Jan 17 00:19:55.268631 systemd[1]: Started cri-containerd-f2bb573b187febce6d7de99cb2810c95ac599878d2856ffd73f2718bab9efdb9.scope - libcontainer container f2bb573b187febce6d7de99cb2810c95ac599878d2856ffd73f2718bab9efdb9. 
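
The surrounding entries trace one full CRI lifecycle per control-plane pod: RunPodSandbox returned a sandbox id, CreateContainer placed a container in that sandbox and returned a container id, and StartContainer now runs it (each wrapped by systemd in a cri-containerd-<id>.scope unit). A hedged Go sketch of that call order against containerd's CRI socket; the configs are deliberately skeletal (no image, command, mounts, or log paths), so a real runtime would reject them, and the point is only the sequence:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // containerd's CRI socket, as used by the kubelet in this log.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Skeletal sandbox config; a real call also needs a log directory,
        // DNS config, and linux options before containerd will accept it.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-scheduler-ci-4081.3.6-n-8d0945b27f",
                Namespace: "kube-system",
                Uid:       "a82a9a470c8eb467796f5b995f6b7025",
            },
        }

        // 1. RunPodSandbox -> the ids in the "returns sandbox id" lines.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer inside that sandbox -> the container ids above.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{ // image/command/mounts omitted
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
            },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer -> the "returns successfully" lines below.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            panic(err)
        }
        fmt.Println("started", created.ContainerId)
    }
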
Jan 17 00:19:55.353510 containerd[1481]: time="2026-01-17T00:19:55.346430726Z" level=info msg="StartContainer for \"c25286e46c523e8344851e2f9ed27466247b340e97f176f07c947396530204c7\" returns successfully" Jan 17 00:19:55.424915 containerd[1481]: time="2026-01-17T00:19:55.424445569Z" level=info msg="StartContainer for \"4b838f96d454ba05ed24cce4bc327ce6ee5fe7fde03661317623008252003d81\" returns successfully" Jan 17 00:19:55.443210 containerd[1481]: time="2026-01-17T00:19:55.443154253Z" level=info msg="StartContainer for \"f2bb573b187febce6d7de99cb2810c95ac599878d2856ffd73f2718bab9efdb9\" returns successfully" Jan 17 00:19:55.445035 kubelet[2151]: E0117 00:19:55.444979 2151 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://165.232.147.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.147.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:19:55.544473 kubelet[2151]: E0117 00:19:55.544127 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:55.544737 kubelet[2151]: E0117 00:19:55.544545 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:55.550199 kubelet[2151]: E0117 00:19:55.550148 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:55.550431 kubelet[2151]: E0117 00:19:55.550387 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:55.552438 kubelet[2151]: E0117 00:19:55.552081 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:55.553035 kubelet[2151]: E0117 00:19:55.552768 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:56.557111 kubelet[2151]: E0117 00:19:56.556782 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:56.557111 kubelet[2151]: E0117 00:19:56.556992 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:56.558123 kubelet[2151]: E0117 00:19:56.557521 2151 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:56.558123 kubelet[2151]: E0117 00:19:56.557685 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:56.696099 
kubelet[2151]: I0117 00:19:56.695167 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.617830 kubelet[2151]: E0117 00:19:58.617736 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-8d0945b27f\" not found" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.715284 kubelet[2151]: I0117 00:19:58.715198 2151 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.715284 kubelet[2151]: E0117 00:19:58.715284 2151 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-8d0945b27f\": node \"ci-4081.3.6-n-8d0945b27f\" not found" Jan 17 00:19:58.799057 kubelet[2151]: I0117 00:19:58.798987 2151 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.808776 kubelet[2151]: E0117 00:19:58.808717 2151 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.808776 kubelet[2151]: I0117 00:19:58.808780 2151 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.810667 kubelet[2151]: E0117 00:19:58.810618 2151 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8d0945b27f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.810667 kubelet[2151]: I0117 00:19:58.810663 2151 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:58.814278 kubelet[2151]: E0117 00:19:58.813575 2151 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:59.340521 kubelet[2151]: I0117 00:19:59.340472 2151 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:59.343105 kubelet[2151]: E0117 00:19:59.343032 2151 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8d0945b27f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:19:59.343454 kubelet[2151]: E0117 00:19:59.343429 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:19:59.470332 kubelet[2151]: I0117 00:19:59.470276 2151 apiserver.go:52] "Watching apiserver" Jan 17 00:19:59.497860 kubelet[2151]: I0117 00:19:59.497808 2151 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:20:00.642509 kubelet[2151]: I0117 00:20:00.642458 2151 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:00.654575 kubelet[2151]: I0117 00:20:00.654505 2151 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:00.655842 kubelet[2151]: E0117 00:20:00.655635 2151 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:01.080024 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Jan 17 00:20:01.080053 systemd[1]: Reloading... Jan 17 00:20:01.201354 zram_generator::config[2484]: No configuration found. Jan 17 00:20:01.355921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:20:01.480030 systemd[1]: Reloading finished in 399 ms. Jan 17 00:20:01.533643 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:20:01.544818 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:20:01.545057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:20:01.545125 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 124.2M memory peak, 0B memory swap peak. Jan 17 00:20:01.553224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:20:01.794129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:20:01.812910 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:20:01.940225 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:20:01.940225 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:20:01.941100 kubelet[2532]: I0117 00:20:01.940298 2532 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:20:01.966433 kubelet[2532]: I0117 00:20:01.965276 2532 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:20:01.966433 kubelet[2532]: I0117 00:20:01.965328 2532 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:20:01.966433 kubelet[2532]: I0117 00:20:01.965387 2532 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:20:01.966433 kubelet[2532]: I0117 00:20:01.965416 2532 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
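
The entries above and below come from a second kubelet instance (pid 2532), started after the systemd reload. Each kubelet line carries the klog header format Lmmdd hh:mm:ss.uuuuuu PID file:line] msg, where the leading letter is the severity (I=info, W=warning, E=error, F=fatal); that is how the I0117/E0117 prefixes throughout this log decode. A small stdlib parser for that header, illustrative only:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches headers like `E0117 00:20:02.257772 2532 manager.go:513]`.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.-]+:\d+)\]`)

    func main() {
        line := `E0117 00:20:02.257772 2532 manager.go:513] "Failed to read data from checkpoint"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog-formatted line")
            return
        }
        fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s source=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
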
Jan 17 00:20:01.966433 kubelet[2532]: I0117 00:20:01.965802 2532 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:20:01.968910 kubelet[2532]: I0117 00:20:01.968861 2532 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:20:01.974761 kubelet[2532]: I0117 00:20:01.974684 2532 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:20:01.987518 kubelet[2532]: E0117 00:20:01.987349 2532 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:20:01.987851 kubelet[2532]: I0117 00:20:01.987824 2532 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:20:01.994037 kubelet[2532]: I0117 00:20:01.993984 2532 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:20:02.008211 kubelet[2532]: I0117 00:20:02.007646 2532 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:20:02.008629 kubelet[2532]: I0117 00:20:02.008030 2532 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8d0945b27f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:20:02.008629 kubelet[2532]: I0117 00:20:02.008380 2532 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:20:02.008629 kubelet[2532]: I0117 00:20:02.008401 2532 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:20:02.008629 kubelet[2532]: I0117 00:20:02.008462 2532 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:20:02.013108 kubelet[2532]: I0117 00:20:02.012989 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:20:02.013501 kubelet[2532]: I0117 00:20:02.013462 2532 kubelet.go:475] "Attempting 
to sync node with API server" Jan 17 00:20:02.014187 kubelet[2532]: I0117 00:20:02.014156 2532 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:20:02.014377 kubelet[2532]: I0117 00:20:02.014207 2532 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:20:02.014377 kubelet[2532]: I0117 00:20:02.014234 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:20:02.039296 kubelet[2532]: I0117 00:20:02.039070 2532 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:20:02.041361 kubelet[2532]: I0117 00:20:02.041181 2532 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:20:02.041361 kubelet[2532]: I0117 00:20:02.041291 2532 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:20:02.070395 kubelet[2532]: I0117 00:20:02.067737 2532 server.go:1262] "Started kubelet" Jan 17 00:20:02.072759 kubelet[2532]: I0117 00:20:02.072462 2532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:20:02.075769 kubelet[2532]: I0117 00:20:02.074572 2532 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:20:02.080530 kubelet[2532]: I0117 00:20:02.079969 2532 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:20:02.090516 kubelet[2532]: I0117 00:20:02.090159 2532 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:20:02.090516 kubelet[2532]: I0117 00:20:02.090241 2532 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:20:02.099271 kubelet[2532]: I0117 00:20:02.099180 2532 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:20:02.099477 kubelet[2532]: I0117 00:20:02.092804 2532 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:20:02.101313 kubelet[2532]: I0117 00:20:02.099633 2532 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:20:02.107270 kubelet[2532]: I0117 00:20:02.107183 2532 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:20:02.109005 kubelet[2532]: I0117 00:20:02.107446 2532 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:20:02.121151 kubelet[2532]: E0117 00:20:02.117583 2532 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:20:02.121151 kubelet[2532]: I0117 00:20:02.117664 2532 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:20:02.121151 kubelet[2532]: I0117 00:20:02.117860 2532 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:20:02.132458 kubelet[2532]: I0117 00:20:02.132348 2532 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:20:02.154505 kubelet[2532]: I0117 00:20:02.154451 2532 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:20:02.157936 kubelet[2532]: I0117 00:20:02.157891 2532 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:20:02.157936 kubelet[2532]: I0117 00:20:02.157929 2532 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:20:02.158114 kubelet[2532]: I0117 00:20:02.157973 2532 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:20:02.158114 kubelet[2532]: E0117 00:20:02.158032 2532 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:20:02.245365 kubelet[2532]: I0117 00:20:02.244574 2532 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:20:02.245365 kubelet[2532]: I0117 00:20:02.245323 2532 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:20:02.245365 kubelet[2532]: I0117 00:20:02.245375 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:20:02.245592 kubelet[2532]: I0117 00:20:02.245569 2532 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:20:02.245592 kubelet[2532]: I0117 00:20:02.245580 2532 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:20:02.245640 kubelet[2532]: I0117 00:20:02.245599 2532 policy_none.go:49] "None policy: Start" Jan 17 00:20:02.245640 kubelet[2532]: I0117 00:20:02.245612 2532 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:20:02.245640 kubelet[2532]: I0117 00:20:02.245622 2532 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:20:02.245736 kubelet[2532]: I0117 00:20:02.245717 2532 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:20:02.245736 kubelet[2532]: I0117 00:20:02.245734 2532 policy_none.go:47] "Start" Jan 17 00:20:02.257850 kubelet[2532]: E0117 00:20:02.257772 2532 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:20:02.259740 kubelet[2532]: I0117 00:20:02.259567 2532 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:20:02.259740 kubelet[2532]: I0117 00:20:02.259596 2532 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:20:02.262109 kubelet[2532]: I0117 00:20:02.260171 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:20:02.264359 kubelet[2532]: I0117 00:20:02.264044 2532 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.271707 kubelet[2532]: I0117 00:20:02.270471 2532 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.276225 kubelet[2532]: I0117 00:20:02.276041 2532 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.277686 kubelet[2532]: E0117 00:20:02.277162 2532 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:20:02.297378 kubelet[2532]: I0117 00:20:02.297158 2532 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:02.300685 kubelet[2532]: I0117 00:20:02.298192 2532 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:02.300685 kubelet[2532]: E0117 00:20:02.299610 2532 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.303435 kubelet[2532]: I0117 00:20:02.302321 2532 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:02.310741 kubelet[2532]: I0117 00:20:02.309793 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.310741 kubelet[2532]: I0117 00:20:02.309862 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.310741 kubelet[2532]: I0117 00:20:02.309903 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.310741 kubelet[2532]: I0117 00:20:02.309953 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.310741 kubelet[2532]: I0117 00:20:02.309976 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.311111 kubelet[2532]: I0117 00:20:02.310006 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a82a9a470c8eb467796f5b995f6b7025-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8d0945b27f\" (UID: \"a82a9a470c8eb467796f5b995f6b7025\") " 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.311111 kubelet[2532]: I0117 00:20:02.310036 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4131dcfbec40939c356ae8e91311312-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" (UID: \"f4131dcfbec40939c356ae8e91311312\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.311111 kubelet[2532]: I0117 00:20:02.310057 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.311111 kubelet[2532]: I0117 00:20:02.310072 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09ed2b5e98fbb02e7d27edc39fd3d7c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8d0945b27f\" (UID: \"09ed2b5e98fbb02e7d27edc39fd3d7c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.383431 kubelet[2532]: I0117 00:20:02.381822 2532 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.406703 kubelet[2532]: I0117 00:20:02.406642 2532 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.406920 kubelet[2532]: I0117 00:20:02.406798 2532 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:02.600600 kubelet[2532]: E0117 00:20:02.600164 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:02.600793 kubelet[2532]: E0117 00:20:02.600711 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:02.605197 kubelet[2532]: E0117 00:20:02.604791 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:03.034034 kubelet[2532]: I0117 00:20:03.032679 2532 apiserver.go:52] "Watching apiserver" Jan 17 00:20:03.107381 kubelet[2532]: I0117 00:20:03.107330 2532 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:20:03.201689 kubelet[2532]: I0117 00:20:03.201619 2532 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:03.202313 kubelet[2532]: E0117 00:20:03.202068 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:03.202588 kubelet[2532]: E0117 00:20:03.202406 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:03.223175 kubelet[2532]: I0117 00:20:03.222613 2532 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:03.223175 kubelet[2532]: E0117 00:20:03.222696 2532 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8d0945b27f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:03.223175 kubelet[2532]: E0117 00:20:03.223003 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:03.278209 kubelet[2532]: I0117 00:20:03.278120 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8d0945b27f" podStartSLOduration=3.275438722 podStartE2EDuration="3.275438722s" podCreationTimestamp="2026-01-17 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:03.257569018 +0000 UTC m=+1.437260326" watchObservedRunningTime="2026-01-17 00:20:03.275438722 +0000 UTC m=+1.455130000" Jan 17 00:20:03.319947 kubelet[2532]: I0117 00:20:03.318992 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8d0945b27f" podStartSLOduration=1.3189746740000001 podStartE2EDuration="1.318974674s" podCreationTimestamp="2026-01-17 00:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:03.278637557 +0000 UTC m=+1.458328882" watchObservedRunningTime="2026-01-17 00:20:03.318974674 +0000 UTC m=+1.498665971" Jan 17 00:20:04.204557 kubelet[2532]: E0117 00:20:04.203883 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:04.204557 kubelet[2532]: E0117 00:20:04.204534 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:05.206186 kubelet[2532]: E0117 00:20:05.206088 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:06.363607 kubelet[2532]: E0117 00:20:06.363541 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:06.416987 kubelet[2532]: I0117 00:20:06.416889 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8d0945b27f" podStartSLOduration=4.41685522 podStartE2EDuration="4.41685522s" podCreationTimestamp="2026-01-17 00:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:03.320916545 +0000 UTC m=+1.500607839" watchObservedRunningTime="2026-01-17 00:20:06.41685522 +0000 UTC m=+4.596546524" Jan 17 00:20:06.565850 kubelet[2532]: I0117 00:20:06.565807 2532 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Jan 17 00:20:06.566469 containerd[1481]: time="2026-01-17T00:20:06.566287406Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:20:06.567023 kubelet[2532]: I0117 00:20:06.566764 2532 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:20:07.211807 kubelet[2532]: E0117 00:20:07.211737 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:07.260743 systemd[1]: Created slice kubepods-besteffort-poded3d9866_91a3_42a2_8ad8_6e11059b8ce2.slice - libcontainer container kubepods-besteffort-poded3d9866_91a3_42a2_8ad8_6e11059b8ce2.slice. Jan 17 00:20:07.292427 update_engine[1458]: I20260117 00:20:07.291536 1458 update_attempter.cc:509] Updating boot flags... Jan 17 00:20:07.343934 kubelet[2532]: I0117 00:20:07.343386 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed3d9866-91a3-42a2-8ad8-6e11059b8ce2-xtables-lock\") pod \"kube-proxy-8xpgd\" (UID: \"ed3d9866-91a3-42a2-8ad8-6e11059b8ce2\") " pod="kube-system/kube-proxy-8xpgd" Jan 17 00:20:07.343934 kubelet[2532]: I0117 00:20:07.343440 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nd7s\" (UniqueName: \"kubernetes.io/projected/ed3d9866-91a3-42a2-8ad8-6e11059b8ce2-kube-api-access-9nd7s\") pod \"kube-proxy-8xpgd\" (UID: \"ed3d9866-91a3-42a2-8ad8-6e11059b8ce2\") " pod="kube-system/kube-proxy-8xpgd" Jan 17 00:20:07.343934 kubelet[2532]: I0117 00:20:07.343467 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed3d9866-91a3-42a2-8ad8-6e11059b8ce2-kube-proxy\") pod \"kube-proxy-8xpgd\" (UID: \"ed3d9866-91a3-42a2-8ad8-6e11059b8ce2\") " pod="kube-system/kube-proxy-8xpgd" Jan 17 00:20:07.343934 kubelet[2532]: I0117 00:20:07.343482 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed3d9866-91a3-42a2-8ad8-6e11059b8ce2-lib-modules\") pod \"kube-proxy-8xpgd\" (UID: \"ed3d9866-91a3-42a2-8ad8-6e11059b8ce2\") " pod="kube-system/kube-proxy-8xpgd" Jan 17 00:20:07.360293 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2589) Jan 17 00:20:07.458583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2588) Jan 17 00:20:07.581570 kubelet[2532]: E0117 00:20:07.581448 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:07.587853 containerd[1481]: time="2026-01-17T00:20:07.587375357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xpgd,Uid:ed3d9866-91a3-42a2-8ad8-6e11059b8ce2,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:07.623567 containerd[1481]: time="2026-01-17T00:20:07.622778750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:07.623567 containerd[1481]: time="2026-01-17T00:20:07.622850086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:07.623567 containerd[1481]: time="2026-01-17T00:20:07.622882677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.623567 containerd[1481]: time="2026-01-17T00:20:07.623070256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.649994 systemd[1]: run-containerd-runc-k8s.io-a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a-runc.K2Xig1.mount: Deactivated successfully. Jan 17 00:20:07.659613 systemd[1]: Started cri-containerd-a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a.scope - libcontainer container a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a. Jan 17 00:20:07.715321 containerd[1481]: time="2026-01-17T00:20:07.715190630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xpgd,Uid:ed3d9866-91a3-42a2-8ad8-6e11059b8ce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a\"" Jan 17 00:20:07.718288 kubelet[2532]: E0117 00:20:07.717425 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:07.729464 containerd[1481]: time="2026-01-17T00:20:07.729406023Z" level=info msg="CreateContainer within sandbox \"a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:20:07.756081 containerd[1481]: time="2026-01-17T00:20:07.756029950Z" level=info msg="CreateContainer within sandbox \"a33c2143b450906f396c7637f795d28d9bb05962fa3e9c5addd95d58d4606a4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65493cf3ce652c21fa2494ba71a73cc2e373fc8fbacb2b70db3fa5837583c653\"" Jan 17 00:20:07.757485 containerd[1481]: time="2026-01-17T00:20:07.757371412Z" level=info msg="StartContainer for \"65493cf3ce652c21fa2494ba71a73cc2e373fc8fbacb2b70db3fa5837583c653\"" Jan 17 00:20:07.788504 systemd[1]: Created slice kubepods-besteffort-pod278b48c0_ed3b_40a6_9649_009283983248.slice - libcontainer container kubepods-besteffort-pod278b48c0_ed3b_40a6_9649_009283983248.slice. Jan 17 00:20:07.819563 systemd[1]: Started cri-containerd-65493cf3ce652c21fa2494ba71a73cc2e373fc8fbacb2b70db3fa5837583c653.scope - libcontainer container 65493cf3ce652c21fa2494ba71a73cc2e373fc8fbacb2b70db3fa5837583c653. 
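
The entries above trace the standard CRI flow for kube-proxy-8xpgd: the kubelet asks containerd to RunPodSandbox, containerd's runc.v2 shim loads its ttrpc plugins, and systemd tracks the shim in a transient cri-containerd-<id>.scope unit before the container starts. A heavily trimmed sketch of the same three calls against containerd's CRI socket, using the k8s.io/cri-api client; the socket path and pod metadata are taken from the log, while the image tag is hypothetical and everything else a real kubelet supplies (log paths, DNS config, security context) is omitted:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint (assumption: the default socket path).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox -- mirrors the "RunPodSandbox for &PodSandboxMetadata{...}" entry.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-8xpgd",
                Namespace: "kube-system",
                Uid:       "ed3d9866-91a3-42a2-8ad8-6e11059b8ce2",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within the returned sandbox id.
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"}, // hypothetical tag
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer -- the log's "StartContainer for ... returns successfully".
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("started", cc.ContainerId, "in sandbox", sb.PodSandboxId)
    }
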
Jan 17 00:20:07.854241 containerd[1481]: time="2026-01-17T00:20:07.854002062Z" level=info msg="StartContainer for \"65493cf3ce652c21fa2494ba71a73cc2e373fc8fbacb2b70db3fa5837583c653\" returns successfully" Jan 17 00:20:07.946545 kubelet[2532]: I0117 00:20:07.946428 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hl7q\" (UniqueName: \"kubernetes.io/projected/278b48c0-ed3b-40a6-9649-009283983248-kube-api-access-9hl7q\") pod \"tigera-operator-65cdcdfd6d-6jz5d\" (UID: \"278b48c0-ed3b-40a6-9649-009283983248\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6jz5d" Jan 17 00:20:07.946545 kubelet[2532]: I0117 00:20:07.946512 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/278b48c0-ed3b-40a6-9649-009283983248-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-6jz5d\" (UID: \"278b48c0-ed3b-40a6-9649-009283983248\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6jz5d" Jan 17 00:20:08.117430 containerd[1481]: time="2026-01-17T00:20:08.117369551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6jz5d,Uid:278b48c0-ed3b-40a6-9649-009283983248,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:20:08.151775 containerd[1481]: time="2026-01-17T00:20:08.151642966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:08.152089 containerd[1481]: time="2026-01-17T00:20:08.151962546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:08.152089 containerd[1481]: time="2026-01-17T00:20:08.152028021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:08.152574 containerd[1481]: time="2026-01-17T00:20:08.152314133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:08.192531 systemd[1]: Started cri-containerd-a588b6478c6c8b3ee9d65fd8897de3f6cbef78158d2a1fcbb9e8de1ec978556b.scope - libcontainer container a588b6478c6c8b3ee9d65fd8897de3f6cbef78158d2a1fcbb9e8de1ec978556b. 
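
Both workloads land in BestEffort QoS slices: systemd reports "Created slice kubepods-besteffort-pod<uid>.slice", with the pod UID's dashes rewritten to underscores because "-" is systemd's hierarchy separator in slice names. A small sketch of that name derivation (the function name is ours, not the kubelet's):

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSliceName derives the systemd slice for a BestEffort pod.
    // Dashes in the UID become underscores, since systemd interprets "-"
    // in a slice name as a cgroup hierarchy separator.
    func besteffortSliceName(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        // UID of the tigera-operator pod from the volume-reconciler entries above.
        fmt.Println(besteffortSliceName("278b48c0-ed3b-40a6-9649-009283983248"))
        // -> kubepods-besteffort-pod278b48c0_ed3b_40a6_9649_009283983248.slice
    }
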
Jan 17 00:20:08.236968 kubelet[2532]: E0117 00:20:08.235008 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:08.239816 kubelet[2532]: E0117 00:20:08.237975 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:08.283609 kubelet[2532]: I0117 00:20:08.282077 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8xpgd" podStartSLOduration=1.273571323 podStartE2EDuration="1.273571323s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:08.2717168 +0000 UTC m=+6.451408102" watchObservedRunningTime="2026-01-17 00:20:08.273571323 +0000 UTC m=+6.453262623" Jan 17 00:20:08.329622 containerd[1481]: time="2026-01-17T00:20:08.329455999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6jz5d,Uid:278b48c0-ed3b-40a6-9649-009283983248,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a588b6478c6c8b3ee9d65fd8897de3f6cbef78158d2a1fcbb9e8de1ec978556b\"" Jan 17 00:20:08.335275 containerd[1481]: time="2026-01-17T00:20:08.334845758Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:20:09.198961 kubelet[2532]: E0117 00:20:09.198906 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:09.239086 kubelet[2532]: E0117 00:20:09.238990 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:09.485029 kubelet[2532]: E0117 00:20:09.484842 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:10.137008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360720975.mount: Deactivated successfully. 
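
The recurring "Nameserver limits exceeded" errors come from the kubelet's resolv.conf handling: classic glibc resolvers honor only the first three nameserver entries, so the kubelet warns and truncates when the node's /etc/resolv.conf lists more (the applied line here even carries a duplicate, 67.207.67.3 appearing twice). A rough sketch of the same check, assuming the conventional limit of 3:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc uses at most 3 nameserver lines

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // Mirrors the logged behavior: warn, then apply only the first three.
            fmt.Printf("Nameserver limits exceeded, applied nameserver line: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }
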
Jan 17 00:20:10.237446 kubelet[2532]: E0117 00:20:10.236780 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:10.237446 kubelet[2532]: E0117 00:20:10.237166 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:11.503867 containerd[1481]: time="2026-01-17T00:20:11.502419421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:11.503867 containerd[1481]: time="2026-01-17T00:20:11.503793081Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:20:11.507303 containerd[1481]: time="2026-01-17T00:20:11.506129446Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:11.509711 containerd[1481]: time="2026-01-17T00:20:11.509666047Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:11.510978 containerd[1481]: time="2026-01-17T00:20:11.510404850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.175517494s" Jan 17 00:20:11.511705 containerd[1481]: time="2026-01-17T00:20:11.511676267Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:20:11.519700 containerd[1481]: time="2026-01-17T00:20:11.519633117Z" level=info msg="CreateContainer within sandbox \"a588b6478c6c8b3ee9d65fd8897de3f6cbef78158d2a1fcbb9e8de1ec978556b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:20:11.535823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382972166.mount: Deactivated successfully. Jan 17 00:20:11.540347 containerd[1481]: time="2026-01-17T00:20:11.540199081Z" level=info msg="CreateContainer within sandbox \"a588b6478c6c8b3ee9d65fd8897de3f6cbef78158d2a1fcbb9e8de1ec978556b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a\"" Jan 17 00:20:11.542147 containerd[1481]: time="2026-01-17T00:20:11.541844357Z" level=info msg="StartContainer for \"c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a\"" Jan 17 00:20:11.583883 systemd[1]: run-containerd-runc-k8s.io-c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a-runc.Jz5u1o.mount: Deactivated successfully. Jan 17 00:20:11.601624 systemd[1]: Started cri-containerd-c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a.scope - libcontainer container c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a. 
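
The ImageCreate events and the final "Pulled image ... in 3.175517494s" summary that follow are containerd's view of the tigera-operator pull (25,061,691 bytes read for a 25,057,686-byte image). Driving the same pull directly through the containerd Go client might look like this; the socket path and the k8s.io namespace are the defaults containerd's CRI plugin uses:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
    }
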
Jan 17 00:20:11.635450 containerd[1481]: time="2026-01-17T00:20:11.635328303Z" level=info msg="StartContainer for \"c89362fdb9632e76aa034ff64770d28b4d3478eb2a25afe63f8b3bf32ea0772a\" returns successfully" Jan 17 00:20:18.751546 sudo[1655]: pam_unix(sudo:session): session closed for user root Jan 17 00:20:18.822684 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:18.833855 systemd[1]: sshd@6-165.232.147.124:22-4.153.228.146:45976.service: Deactivated successfully. Jan 17 00:20:18.836704 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:20:18.837180 systemd[1]: session-7.scope: Consumed 7.004s CPU time, 149.4M memory peak, 0B memory swap peak. Jan 17 00:20:18.839986 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:20:18.842140 systemd-logind[1452]: Removed session 7. Jan 17 00:20:25.556130 kubelet[2532]: I0117 00:20:25.556021 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-6jz5d" podStartSLOduration=15.375730216000001 podStartE2EDuration="18.556004003s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="2026-01-17 00:20:08.332578644 +0000 UTC m=+6.512269939" lastFinishedPulling="2026-01-17 00:20:11.512852441 +0000 UTC m=+9.692543726" observedRunningTime="2026-01-17 00:20:12.25743471 +0000 UTC m=+10.437126028" watchObservedRunningTime="2026-01-17 00:20:25.556004003 +0000 UTC m=+23.735695306" Jan 17 00:20:25.570902 kubelet[2532]: I0117 00:20:25.570804 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/24589046-497a-4266-a5c7-2c597e1a5c4d-typha-certs\") pod \"calico-typha-575f495d56-j4tqr\" (UID: \"24589046-497a-4266-a5c7-2c597e1a5c4d\") " pod="calico-system/calico-typha-575f495d56-j4tqr" Jan 17 00:20:25.570902 kubelet[2532]: I0117 00:20:25.570861 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxlk\" (UniqueName: \"kubernetes.io/projected/24589046-497a-4266-a5c7-2c597e1a5c4d-kube-api-access-dqxlk\") pod \"calico-typha-575f495d56-j4tqr\" (UID: \"24589046-497a-4266-a5c7-2c597e1a5c4d\") " pod="calico-system/calico-typha-575f495d56-j4tqr" Jan 17 00:20:25.572450 kubelet[2532]: I0117 00:20:25.571415 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24589046-497a-4266-a5c7-2c597e1a5c4d-tigera-ca-bundle\") pod \"calico-typha-575f495d56-j4tqr\" (UID: \"24589046-497a-4266-a5c7-2c597e1a5c4d\") " pod="calico-system/calico-typha-575f495d56-j4tqr" Jan 17 00:20:25.573379 systemd[1]: Created slice kubepods-besteffort-pod24589046_497a_4266_a5c7_2c597e1a5c4d.slice - libcontainer container kubepods-besteffort-pod24589046_497a_4266_a5c7_2c597e1a5c4d.slice. Jan 17 00:20:25.765090 systemd[1]: Created slice kubepods-besteffort-pod2612179c_1896_4e4e_90f3_5ffd4ba9a397.slice - libcontainer container kubepods-besteffort-pod2612179c_1896_4e4e_90f3_5ffd4ba9a397.slice. 
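
The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), so registry latency does not count against the startup SLI. For tigera-operator above, using the monotonic m=+... offsets for the pull window: 18.556004003s - (9.692543726s - 6.512269939s) = 15.375730216s, exactly the logged value. A sketch of the same arithmetic on the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches the tracker's "2026-01-17 00:20:25.556004003 +0000 UTC" stamps.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the tigera-operator pod_startup_latency_tracker entry.
        created := mustParse("2026-01-17 00:20:07 +0000 UTC")
        watchObservedRunning := mustParse("2026-01-17 00:20:25.556004003 +0000 UTC")

        // Pull window taken from the monotonic m=+... offsets in the same entry.
        firstStartedPulling, _ := time.ParseDuration("6.512269939s")
        lastFinishedPulling, _ := time.ParseDuration("9.692543726s")

        e2e := watchObservedRunning.Sub(created)                  // 18.556004003s
        slo := e2e - (lastFinishedPulling - firstStartedPulling)  // 15.375730216s
        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    }
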
Jan 17 00:20:25.776722 kubelet[2532]: I0117 00:20:25.776412 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-cni-net-dir\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.776722 kubelet[2532]: I0117 00:20:25.776495 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-flexvol-driver-host\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.776722 kubelet[2532]: I0117 00:20:25.776532 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2612179c-1896-4e4e-90f3-5ffd4ba9a397-node-certs\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.776722 kubelet[2532]: I0117 00:20:25.776587 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-var-lib-calico\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.776722 kubelet[2532]: I0117 00:20:25.776638 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-lib-modules\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777101 kubelet[2532]: I0117 00:20:25.776660 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-policysync\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777607 kubelet[2532]: I0117 00:20:25.776693 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-xtables-lock\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777607 kubelet[2532]: I0117 00:20:25.777308 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2612179c-1896-4e4e-90f3-5ffd4ba9a397-tigera-ca-bundle\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777607 kubelet[2532]: I0117 00:20:25.777404 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-cni-log-dir\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777607 kubelet[2532]: I0117 00:20:25.777433 2532 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsgz6\" (UniqueName: \"kubernetes.io/projected/2612179c-1896-4e4e-90f3-5ffd4ba9a397-kube-api-access-qsgz6\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777607 kubelet[2532]: I0117 00:20:25.777507 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-cni-bin-dir\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.777923 kubelet[2532]: I0117 00:20:25.777554 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2612179c-1896-4e4e-90f3-5ffd4ba9a397-var-run-calico\") pod \"calico-node-fh2bp\" (UID: \"2612179c-1896-4e4e-90f3-5ffd4ba9a397\") " pod="calico-system/calico-node-fh2bp" Jan 17 00:20:25.883765 kubelet[2532]: E0117 00:20:25.883711 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.884129 kubelet[2532]: W0117 00:20:25.883983 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.884129 kubelet[2532]: E0117 00:20:25.884048 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.885569 kubelet[2532]: E0117 00:20:25.885540 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.885830 kubelet[2532]: W0117 00:20:25.885712 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.885830 kubelet[2532]: E0117 00:20:25.885743 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.886207 kubelet[2532]: E0117 00:20:25.886163 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.886207 kubelet[2532]: W0117 00:20:25.886176 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.887288 kubelet[2532]: E0117 00:20:25.886190 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:20:25.887642 kubelet[2532]: E0117 00:20:25.887628 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.887808 kubelet[2532]: W0117 00:20:25.887706 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.887808 kubelet[2532]: E0117 00:20:25.887726 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.888194 kubelet[2532]: E0117 00:20:25.888085 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.888194 kubelet[2532]: W0117 00:20:25.888126 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.888194 kubelet[2532]: E0117 00:20:25.888148 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.889696 kubelet[2532]: E0117 00:20:25.889502 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.889696 kubelet[2532]: W0117 00:20:25.889539 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.889696 kubelet[2532]: E0117 00:20:25.889556 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.890776 kubelet[2532]: E0117 00:20:25.890506 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.890776 kubelet[2532]: W0117 00:20:25.890521 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.890776 kubelet[2532]: E0117 00:20:25.890534 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 17 00:20:25.892465 kubelet[2532]: E0117 00:20:25.892377 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:25.892765 kubelet[2532]: E0117 00:20:25.892614 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.892765 kubelet[2532]: W0117 00:20:25.892646 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.892765 kubelet[2532]: E0117 00:20:25.892661 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.895815 containerd[1481]: time="2026-01-17T00:20:25.893719890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575f495d56-j4tqr,Uid:24589046-497a-4266-a5c7-2c597e1a5c4d,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:25.898603 kubelet[2532]: E0117 00:20:25.895495 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.898603 kubelet[2532]: W0117 00:20:25.895515 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.898603 kubelet[2532]: E0117 00:20:25.895741 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.901557 kubelet[2532]: E0117 00:20:25.899652 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.901557 kubelet[2532]: W0117 00:20:25.899785 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.901557 kubelet[2532]: E0117 00:20:25.899815 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.903597 kubelet[2532]: E0117 00:20:25.903571 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.904076 kubelet[2532]: W0117 00:20:25.903867 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.904076 kubelet[2532]: E0117 00:20:25.903921 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.904874 kubelet[2532]: E0117 00:20:25.904820 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.904874 kubelet[2532]: W0117 00:20:25.904855 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.904874 kubelet[2532]: E0117 00:20:25.904871 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.907031 kubelet[2532]: E0117 00:20:25.906905 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.907356 kubelet[2532]: W0117 00:20:25.907317 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.907426 kubelet[2532]: E0117 00:20:25.907374 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.909694 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.911298 kubelet[2532]: W0117 00:20:25.909734 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.909754 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.910079 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.911298 kubelet[2532]: W0117 00:20:25.910124 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.910136 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.910433 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.911298 kubelet[2532]: W0117 00:20:25.910445 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.910456 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.911298 kubelet[2532]: E0117 00:20:25.910696 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.911631 kubelet[2532]: W0117 00:20:25.910704 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.911631 kubelet[2532]: E0117 00:20:25.910714 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.919491 kubelet[2532]: E0117 00:20:25.919447 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.919491 kubelet[2532]: W0117 00:20:25.919476 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.919491 kubelet[2532]: E0117 00:20:25.919501 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.971506 containerd[1481]: time="2026-01-17T00:20:25.970181801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:25.971506 containerd[1481]: time="2026-01-17T00:20:25.970875502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:25.971506 containerd[1481]: time="2026-01-17T00:20:25.970907473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:25.971506 containerd[1481]: time="2026-01-17T00:20:25.971036569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:26.011731 systemd[1]: Started cri-containerd-3d16d596d5457bf1760cad6e9c35562e21a0a0cb6def60c9073501ef1b1a25f3.scope - libcontainer container 3d16d596d5457bf1760cad6e9c35562e21a0a0cb6def60c9073501ef1b1a25f3. Jan 17 00:20:26.038467 kubelet[2532]: E0117 00:20:26.037940 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:26.068633 kubelet[2532]: E0117 00:20:26.068313 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.068633 kubelet[2532]: W0117 00:20:26.068342 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.068633 kubelet[2532]: E0117 00:20:26.068367 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.068633 kubelet[2532]: E0117 00:20:26.068543 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.068633 kubelet[2532]: W0117 00:20:26.068550 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.068633 kubelet[2532]: E0117 00:20:26.068558 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.069725 kubelet[2532]: E0117 00:20:26.068713 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.069725 kubelet[2532]: W0117 00:20:26.068720 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.069725 kubelet[2532]: E0117 00:20:26.068727 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.069859 kubelet[2532]: E0117 00:20:26.069834 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.069859 kubelet[2532]: W0117 00:20:26.069853 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.069967 kubelet[2532]: E0117 00:20:26.069876 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.070902 kubelet[2532]: E0117 00:20:26.070827 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.070902 kubelet[2532]: W0117 00:20:26.070851 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.070902 kubelet[2532]: E0117 00:20:26.070874 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.071445 kubelet[2532]: E0117 00:20:26.071309 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.071445 kubelet[2532]: W0117 00:20:26.071322 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.071445 kubelet[2532]: E0117 00:20:26.071335 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.072185 kubelet[2532]: E0117 00:20:26.072101 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.072185 kubelet[2532]: W0117 00:20:26.072128 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.072185 kubelet[2532]: E0117 00:20:26.072140 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.072599 kubelet[2532]: E0117 00:20:26.072536 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.072599 kubelet[2532]: W0117 00:20:26.072549 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.072681 kubelet[2532]: E0117 00:20:26.072660 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.073140 kubelet[2532]: E0117 00:20:26.073121 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.073140 kubelet[2532]: W0117 00:20:26.073176 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.073140 kubelet[2532]: E0117 00:20:26.073194 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.074004 kubelet[2532]: E0117 00:20:26.073895 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.074004 kubelet[2532]: W0117 00:20:26.073922 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.074004 kubelet[2532]: E0117 00:20:26.073934 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.075211 kubelet[2532]: E0117 00:20:26.075190 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.075211 kubelet[2532]: W0117 00:20:26.075211 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.075472 kubelet[2532]: E0117 00:20:26.075226 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.075696 kubelet[2532]: E0117 00:20:26.075661 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.075696 kubelet[2532]: W0117 00:20:26.075675 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.075696 kubelet[2532]: E0117 00:20:26.075687 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.076684 kubelet[2532]: E0117 00:20:26.076665 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.076684 kubelet[2532]: W0117 00:20:26.076682 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.076782 kubelet[2532]: E0117 00:20:26.076695 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.077192 kubelet[2532]: E0117 00:20:26.077169 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.077192 kubelet[2532]: W0117 00:20:26.077184 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.077192 kubelet[2532]: E0117 00:20:26.077195 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.077977 kubelet[2532]: E0117 00:20:26.077957 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.077977 kubelet[2532]: W0117 00:20:26.077975 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.077977 kubelet[2532]: E0117 00:20:26.077989 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.078852 kubelet[2532]: E0117 00:20:26.078831 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.078852 kubelet[2532]: W0117 00:20:26.078853 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.078932 kubelet[2532]: E0117 00:20:26.078865 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 17 00:20:26.079171 kubelet[2532]: E0117 00:20:26.079152 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:26.080236 kubelet[2532]: E0117 00:20:26.080212 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.080236 kubelet[2532]: W0117 00:20:26.080230 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.080385 kubelet[2532]: E0117 00:20:26.080246 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.081177 containerd[1481]: time="2026-01-17T00:20:26.080626883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh2bp,Uid:2612179c-1896-4e4e-90f3-5ffd4ba9a397,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:26.081800 kubelet[2532]: E0117 00:20:26.081777 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.081800 kubelet[2532]: W0117 00:20:26.081797 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.081894 kubelet[2532]: E0117 00:20:26.081811 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.082210 kubelet[2532]: E0117 00:20:26.082183 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.082210 kubelet[2532]: W0117 00:20:26.082207 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.082461 kubelet[2532]: E0117 00:20:26.082219 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.082461 kubelet[2532]: E0117 00:20:26.082417 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.082461 kubelet[2532]: W0117 00:20:26.082425 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.082461 kubelet[2532]: E0117 00:20:26.082433 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.083514 kubelet[2532]: E0117 00:20:26.083489 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.083514 kubelet[2532]: W0117 00:20:26.083508 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.083633 kubelet[2532]: E0117 00:20:26.083524 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.083633 kubelet[2532]: I0117 00:20:26.083565 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbls\" (UniqueName: \"kubernetes.io/projected/785ca1fd-8ad2-4e63-be23-ced8350e2045-kube-api-access-ddbls\") pod \"csi-node-driver-pbn95\" (UID: \"785ca1fd-8ad2-4e63-be23-ced8350e2045\") " pod="calico-system/csi-node-driver-pbn95"
Jan 17 00:20:26.084322 kubelet[2532]: E0117 00:20:26.083796 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.084322 kubelet[2532]: W0117 00:20:26.083831 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.084322 kubelet[2532]: E0117 00:20:26.083842 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.084322 kubelet[2532]: I0117 00:20:26.083871 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/785ca1fd-8ad2-4e63-be23-ced8350e2045-socket-dir\") pod \"csi-node-driver-pbn95\" (UID: \"785ca1fd-8ad2-4e63-be23-ced8350e2045\") " pod="calico-system/csi-node-driver-pbn95"
Jan 17 00:20:26.084672 kubelet[2532]: E0117 00:20:26.084552 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.084672 kubelet[2532]: W0117 00:20:26.084563 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.084672 kubelet[2532]: E0117 00:20:26.084574 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.085241 kubelet[2532]: E0117 00:20:26.084881 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.085241 kubelet[2532]: W0117 00:20:26.084890 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.085241 kubelet[2532]: E0117 00:20:26.084904 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.085241 kubelet[2532]: E0117 00:20:26.085158 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.085241 kubelet[2532]: W0117 00:20:26.085166 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.085241 kubelet[2532]: E0117 00:20:26.085176 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.085241 kubelet[2532]: I0117 00:20:26.085203 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/785ca1fd-8ad2-4e63-be23-ced8350e2045-kubelet-dir\") pod \"csi-node-driver-pbn95\" (UID: \"785ca1fd-8ad2-4e63-be23-ced8350e2045\") " pod="calico-system/csi-node-driver-pbn95"
Jan 17 00:20:26.086194 kubelet[2532]: E0117 00:20:26.085631 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.086194 kubelet[2532]: W0117 00:20:26.085645 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.086194 kubelet[2532]: E0117 00:20:26.085681 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.086194 kubelet[2532]: I0117 00:20:26.085701 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/785ca1fd-8ad2-4e63-be23-ced8350e2045-registration-dir\") pod \"csi-node-driver-pbn95\" (UID: \"785ca1fd-8ad2-4e63-be23-ced8350e2045\") " pod="calico-system/csi-node-driver-pbn95"
Jan 17 00:20:26.086514 kubelet[2532]: E0117 00:20:26.086283 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.086514 kubelet[2532]: W0117 00:20:26.086296 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.086514 kubelet[2532]: E0117 00:20:26.086306 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.086514 kubelet[2532]: I0117 00:20:26.086327 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/785ca1fd-8ad2-4e63-be23-ced8350e2045-varrun\") pod \"csi-node-driver-pbn95\" (UID: \"785ca1fd-8ad2-4e63-be23-ced8350e2045\") " pod="calico-system/csi-node-driver-pbn95"
Jan 17 00:20:26.087052 kubelet[2532]: E0117 00:20:26.087032 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.087121 kubelet[2532]: W0117 00:20:26.087059 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.087121 kubelet[2532]: E0117 00:20:26.087071 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.087779 kubelet[2532]: E0117 00:20:26.087442 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.087779 kubelet[2532]: W0117 00:20:26.087456 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.087779 kubelet[2532]: E0117 00:20:26.087466 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.088034 kubelet[2532]: E0117 00:20:26.088015 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.088034 kubelet[2532]: W0117 00:20:26.088032 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.088166 kubelet[2532]: E0117 00:20:26.088043 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.088743 kubelet[2532]: E0117 00:20:26.088721 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.088743 kubelet[2532]: W0117 00:20:26.088739 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.088963 kubelet[2532]: E0117 00:20:26.088753 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.089433 kubelet[2532]: E0117 00:20:26.089388 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.089433 kubelet[2532]: W0117 00:20:26.089409 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.089433 kubelet[2532]: E0117 00:20:26.089424 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.090333 kubelet[2532]: E0117 00:20:26.090285 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.090333 kubelet[2532]: W0117 00:20:26.090296 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.090333 kubelet[2532]: E0117 00:20:26.090308 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.091333 kubelet[2532]: E0117 00:20:26.091312 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.091333 kubelet[2532]: W0117 00:20:26.091329 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.092002 kubelet[2532]: E0117 00:20:26.091344 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.092002 kubelet[2532]: E0117 00:20:26.091613 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.092002 kubelet[2532]: W0117 00:20:26.091626 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.092002 kubelet[2532]: E0117 00:20:26.091642 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.154462 containerd[1481]: time="2026-01-17T00:20:26.148214200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:26.154462 containerd[1481]: time="2026-01-17T00:20:26.150345101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:26.154462 containerd[1481]: time="2026-01-17T00:20:26.150359868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:26.154462 containerd[1481]: time="2026-01-17T00:20:26.151379804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:26.187216 kubelet[2532]: E0117 00:20:26.186992 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.187216 kubelet[2532]: W0117 00:20:26.187036 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.187216 kubelet[2532]: E0117 00:20:26.187065 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.187743 kubelet[2532]: E0117 00:20:26.187378 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.187743 kubelet[2532]: W0117 00:20:26.187389 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.187743 kubelet[2532]: E0117 00:20:26.187399 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.187345 systemd[1]: Started cri-containerd-522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0.scope - libcontainer container 522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0.
Jan 17 00:20:26.188461 kubelet[2532]: E0117 00:20:26.188355 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.188461 kubelet[2532]: W0117 00:20:26.188378 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.188461 kubelet[2532]: E0117 00:20:26.188393 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.188994 kubelet[2532]: E0117 00:20:26.188595 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.188994 kubelet[2532]: W0117 00:20:26.188608 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.188994 kubelet[2532]: E0117 00:20:26.188616 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.190217 kubelet[2532]: E0117 00:20:26.189410 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.190217 kubelet[2532]: W0117 00:20:26.189428 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.190217 kubelet[2532]: E0117 00:20:26.189603 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.190217 kubelet[2532]: E0117 00:20:26.190220 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.190389 kubelet[2532]: W0117 00:20:26.190233 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.190389 kubelet[2532]: E0117 00:20:26.190247 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.190761 kubelet[2532]: E0117 00:20:26.190745 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.191269 kubelet[2532]: W0117 00:20:26.190762 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.191337 kubelet[2532]: E0117 00:20:26.191289 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.192234 kubelet[2532]: E0117 00:20:26.192211 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.192234 kubelet[2532]: W0117 00:20:26.192230 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.193844 kubelet[2532]: E0117 00:20:26.192244 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.193844 kubelet[2532]: E0117 00:20:26.193436 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.193844 kubelet[2532]: W0117 00:20:26.193452 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.193844 kubelet[2532]: E0117 00:20:26.193483 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.194916 kubelet[2532]: E0117 00:20:26.194882 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.194981 kubelet[2532]: W0117 00:20:26.194919 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.194981 kubelet[2532]: E0117 00:20:26.194970 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.196221 kubelet[2532]: E0117 00:20:26.195810 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.196221 kubelet[2532]: W0117 00:20:26.195826 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.196221 kubelet[2532]: E0117 00:20:26.195838 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.197444 kubelet[2532]: E0117 00:20:26.197353 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.197444 kubelet[2532]: W0117 00:20:26.197371 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.197444 kubelet[2532]: E0117 00:20:26.197385 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.197828 kubelet[2532]: E0117 00:20:26.197689 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.197828 kubelet[2532]: W0117 00:20:26.197703 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.197828 kubelet[2532]: E0117 00:20:26.197713 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.198621 kubelet[2532]: E0117 00:20:26.198446 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.198621 kubelet[2532]: W0117 00:20:26.198463 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.198621 kubelet[2532]: E0117 00:20:26.198475 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.199353 kubelet[2532]: E0117 00:20:26.199335 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.200420 kubelet[2532]: W0117 00:20:26.199535 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.200420 kubelet[2532]: E0117 00:20:26.199589 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.200986 kubelet[2532]: E0117 00:20:26.200968 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.200986 kubelet[2532]: W0117 00:20:26.200984 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.201114 kubelet[2532]: E0117 00:20:26.200996 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.202453 kubelet[2532]: E0117 00:20:26.202434 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.202453 kubelet[2532]: W0117 00:20:26.202451 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.202588 kubelet[2532]: E0117 00:20:26.202465 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.202843 kubelet[2532]: E0117 00:20:26.202825 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.202843 kubelet[2532]: W0117 00:20:26.202841 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.202992 kubelet[2532]: E0117 00:20:26.202853 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.204366 kubelet[2532]: E0117 00:20:26.204344 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.204366 kubelet[2532]: W0117 00:20:26.204362 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.204508 kubelet[2532]: E0117 00:20:26.204376 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.204763 kubelet[2532]: E0117 00:20:26.204747 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.204763 kubelet[2532]: W0117 00:20:26.204760 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.204763 kubelet[2532]: E0117 00:20:26.204770 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.205562 kubelet[2532]: E0117 00:20:26.205529 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.205562 kubelet[2532]: W0117 00:20:26.205546 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.205562 kubelet[2532]: E0117 00:20:26.205558 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.206850 kubelet[2532]: E0117 00:20:26.206512 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.206850 kubelet[2532]: W0117 00:20:26.206527 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.206850 kubelet[2532]: E0117 00:20:26.206540 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.207747 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.209297 kubelet[2532]: W0117 00:20:26.207763 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.207778 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.207975 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.209297 kubelet[2532]: W0117 00:20:26.207982 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.207990 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.208149 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.209297 kubelet[2532]: W0117 00:20:26.208156 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.209297 kubelet[2532]: E0117 00:20:26.208163 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.240933 containerd[1481]: time="2026-01-17T00:20:26.240887933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575f495d56-j4tqr,Uid:24589046-497a-4266-a5c7-2c597e1a5c4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d16d596d5457bf1760cad6e9c35562e21a0a0cb6def60c9073501ef1b1a25f3\""
Jan 17 00:20:26.241106 kubelet[2532]: E0117 00:20:26.241081 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:26.241280 kubelet[2532]: W0117 00:20:26.241215 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:26.241358 kubelet[2532]: E0117 00:20:26.241246 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:26.243164 kubelet[2532]: E0117 00:20:26.242771 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:26.245066 containerd[1481]: time="2026-01-17T00:20:26.244958894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:20:26.292649 containerd[1481]: time="2026-01-17T00:20:26.292410670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh2bp,Uid:2612179c-1896-4e4e-90f3-5ffd4ba9a397,Namespace:calico-system,Attempt:0,} returns sandbox id \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\""
Jan 17 00:20:26.296301 kubelet[2532]: E0117 00:20:26.295170 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:27.719166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476356005.mount: Deactivated successfully.
Jan 17 00:20:28.162601 kubelet[2532]: E0117 00:20:28.162456 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045"
Jan 17 00:20:28.935442 containerd[1481]: time="2026-01-17T00:20:28.935365691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:28.936274 containerd[1481]: time="2026-01-17T00:20:28.936209505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:20:28.938021 containerd[1481]: time="2026-01-17T00:20:28.936864482Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:28.938948 containerd[1481]: time="2026-01-17T00:20:28.938653091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:28.939366 containerd[1481]: time="2026-01-17T00:20:28.939341669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.694331951s"
Jan 17 00:20:28.939423 containerd[1481]: time="2026-01-17T00:20:28.939378713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:20:28.940765 containerd[1481]: time="2026-01-17T00:20:28.940744567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:20:28.987626 containerd[1481]: time="2026-01-17T00:20:28.987570708Z" level=info msg="CreateContainer within sandbox \"3d16d596d5457bf1760cad6e9c35562e21a0a0cb6def60c9073501ef1b1a25f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:20:29.007839 containerd[1481]: time="2026-01-17T00:20:29.007630955Z" level=info msg="CreateContainer within sandbox \"3d16d596d5457bf1760cad6e9c35562e21a0a0cb6def60c9073501ef1b1a25f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0de9f02a75c151e6090dec593d7f787d66e42feac5c99da655fbc0df716f37c4\""
Jan 17 00:20:29.008700 containerd[1481]: time="2026-01-17T00:20:29.008652230Z" level=info msg="StartContainer for \"0de9f02a75c151e6090dec593d7f787d66e42feac5c99da655fbc0df716f37c4\""
Jan 17 00:20:29.123581 systemd[1]: Started cri-containerd-0de9f02a75c151e6090dec593d7f787d66e42feac5c99da655fbc0df716f37c4.scope - libcontainer container 0de9f02a75c151e6090dec593d7f787d66e42feac5c99da655fbc0df716f37c4.
Jan 17 00:20:29.192767 containerd[1481]: time="2026-01-17T00:20:29.192596033Z" level=info msg="StartContainer for \"0de9f02a75c151e6090dec593d7f787d66e42feac5c99da655fbc0df716f37c4\" returns successfully" Jan 17 00:20:29.309568 kubelet[2532]: E0117 00:20:29.307979 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:29.407734 kubelet[2532]: E0117 00:20:29.407561 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.407734 kubelet[2532]: W0117 00:20:29.407591 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.407734 kubelet[2532]: E0117 00:20:29.407616 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.409066 kubelet[2532]: E0117 00:20:29.407881 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.409066 kubelet[2532]: W0117 00:20:29.407890 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.409066 kubelet[2532]: E0117 00:20:29.407904 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.410514 kubelet[2532]: E0117 00:20:29.409361 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.410514 kubelet[2532]: W0117 00:20:29.409382 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.410514 kubelet[2532]: E0117 00:20:29.409399 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.410949 kubelet[2532]: E0117 00:20:29.410778 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.410949 kubelet[2532]: W0117 00:20:29.410798 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.410949 kubelet[2532]: E0117 00:20:29.410817 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.412110 kubelet[2532]: E0117 00:20:29.412029 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.412110 kubelet[2532]: W0117 00:20:29.412045 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.412110 kubelet[2532]: E0117 00:20:29.412062 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.412588 kubelet[2532]: E0117 00:20:29.412436 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.412588 kubelet[2532]: W0117 00:20:29.412446 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.412588 kubelet[2532]: E0117 00:20:29.412457 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.412824 kubelet[2532]: E0117 00:20:29.412728 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.412824 kubelet[2532]: W0117 00:20:29.412738 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.412824 kubelet[2532]: E0117 00:20:29.412747 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.413687 kubelet[2532]: E0117 00:20:29.413180 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.413687 kubelet[2532]: W0117 00:20:29.413575 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.413687 kubelet[2532]: E0117 00:20:29.413602 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.414817 kubelet[2532]: E0117 00:20:29.414670 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.414817 kubelet[2532]: W0117 00:20:29.414688 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.414817 kubelet[2532]: E0117 00:20:29.414703 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.415917 kubelet[2532]: E0117 00:20:29.415806 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.415917 kubelet[2532]: W0117 00:20:29.415822 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.415917 kubelet[2532]: E0117 00:20:29.415837 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.418789 kubelet[2532]: E0117 00:20:29.418513 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.418789 kubelet[2532]: W0117 00:20:29.418530 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.418789 kubelet[2532]: E0117 00:20:29.418554 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.422605 kubelet[2532]: E0117 00:20:29.422578 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.422852 kubelet[2532]: W0117 00:20:29.422824 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.423044 kubelet[2532]: E0117 00:20:29.423028 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.423968 kubelet[2532]: E0117 00:20:29.423951 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.424369 kubelet[2532]: W0117 00:20:29.424217 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.424369 kubelet[2532]: E0117 00:20:29.424287 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.427301 kubelet[2532]: E0117 00:20:29.426380 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.427610 kubelet[2532]: W0117 00:20:29.426405 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.427610 kubelet[2532]: E0117 00:20:29.427461 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.428065 kubelet[2532]: E0117 00:20:29.427761 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.428065 kubelet[2532]: W0117 00:20:29.427774 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.428065 kubelet[2532]: E0117 00:20:29.427786 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.430168 kubelet[2532]: E0117 00:20:29.429646 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.430168 kubelet[2532]: W0117 00:20:29.429665 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.430168 kubelet[2532]: E0117 00:20:29.429683 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.431056 kubelet[2532]: E0117 00:20:29.430495 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.431056 kubelet[2532]: W0117 00:20:29.430511 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.431056 kubelet[2532]: E0117 00:20:29.430525 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.433437 kubelet[2532]: E0117 00:20:29.432708 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.433437 kubelet[2532]: W0117 00:20:29.432822 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.433437 kubelet[2532]: E0117 00:20:29.432842 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.433709 kubelet[2532]: E0117 00:20:29.433606 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.433709 kubelet[2532]: W0117 00:20:29.433621 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.433709 kubelet[2532]: E0117 00:20:29.433640 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.437104 kubelet[2532]: E0117 00:20:29.436845 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.437104 kubelet[2532]: W0117 00:20:29.436868 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.437104 kubelet[2532]: E0117 00:20:29.436889 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.437596 kubelet[2532]: E0117 00:20:29.437578 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.438221 kubelet[2532]: W0117 00:20:29.437953 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.438221 kubelet[2532]: E0117 00:20:29.438134 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.438847 kubelet[2532]: E0117 00:20:29.438589 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.438847 kubelet[2532]: W0117 00:20:29.438607 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.438847 kubelet[2532]: E0117 00:20:29.438621 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.439493 kubelet[2532]: E0117 00:20:29.439358 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.439493 kubelet[2532]: W0117 00:20:29.439372 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.439493 kubelet[2532]: E0117 00:20:29.439390 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.440404 kubelet[2532]: E0117 00:20:29.440389 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.441311 kubelet[2532]: W0117 00:20:29.440470 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.441311 kubelet[2532]: E0117 00:20:29.440485 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.441785 kubelet[2532]: E0117 00:20:29.441680 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.441785 kubelet[2532]: W0117 00:20:29.441694 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.441785 kubelet[2532]: E0117 00:20:29.441706 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.442122 kubelet[2532]: E0117 00:20:29.442071 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.442122 kubelet[2532]: W0117 00:20:29.442086 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.442122 kubelet[2532]: E0117 00:20:29.442103 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.444117 kubelet[2532]: E0117 00:20:29.443882 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.444117 kubelet[2532]: W0117 00:20:29.443900 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.444117 kubelet[2532]: E0117 00:20:29.443913 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.446095 kubelet[2532]: E0117 00:20:29.444565 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.446095 kubelet[2532]: W0117 00:20:29.444577 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.446095 kubelet[2532]: E0117 00:20:29.444590 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.447780 kubelet[2532]: E0117 00:20:29.447068 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.448001 kubelet[2532]: W0117 00:20:29.447894 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.448001 kubelet[2532]: E0117 00:20:29.447921 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:29.450084 kubelet[2532]: E0117 00:20:29.448925 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.450084 kubelet[2532]: W0117 00:20:29.448940 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.450084 kubelet[2532]: E0117 00:20:29.448956 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.450908 kubelet[2532]: E0117 00:20:29.450773 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.450908 kubelet[2532]: W0117 00:20:29.450800 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.450908 kubelet[2532]: E0117 00:20:29.450818 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.453294 kubelet[2532]: E0117 00:20:29.452103 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.453294 kubelet[2532]: W0117 00:20:29.452116 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.453294 kubelet[2532]: E0117 00:20:29.452131 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:29.453728 kubelet[2532]: E0117 00:20:29.453664 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:29.453728 kubelet[2532]: W0117 00:20:29.453679 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:29.453728 kubelet[2532]: E0117 00:20:29.453693 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:30.159671 kubelet[2532]: E0117 00:20:30.159572 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:30.313677 kubelet[2532]: I0117 00:20:30.311825 2532 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:30.313677 kubelet[2532]: E0117 00:20:30.312478 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:30.333959 kubelet[2532]: E0117 00:20:30.333908 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:30.333959 kubelet[2532]: W0117 00:20:30.333951 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:30.333959 kubelet[2532]: E0117 00:20:30.333987 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:30.334714 kubelet[2532]: E0117 00:20:30.334357 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:30.334714 kubelet[2532]: W0117 00:20:30.334373 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:30.334714 kubelet[2532]: E0117 00:20:30.334391 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:30.334867 kubelet[2532]: E0117 00:20:30.334746 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:30.334867 kubelet[2532]: W0117 00:20:30.334761 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:30.334867 kubelet[2532]: E0117 00:20:30.334778 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:30.335087 kubelet[2532]: E0117 00:20:30.335068 2532 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:30.335161 kubelet[2532]: W0117 00:20:30.335086 2532 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:30.335161 kubelet[2532]: E0117 00:20:30.335126 2532 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
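[Editor's note: the FlexVolume bursts above come from the kubelet's plugin prober, which rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the init argument, and unmarshals its stdout as JSON. The nodeagent~uds directory no longer contains a working uds executable, so stdout is empty and JSON decoding fails with "unexpected end of JSON input". A minimal sketch of the init handshake a FlexVolume driver is expected to implement, per the FlexVolume call convention; the binary itself is hypothetical:]

```go
// Minimal FlexVolume driver stub: kubelet execs the binary and parses its
// stdout as a JSON status object. An absent or silent binary produces the
// empty-output unmarshal error seen in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet to skip attach/detach calls.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```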
Jan 17 00:20:30.482801 containerd[1481]: time="2026-01-17T00:20:30.482642510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:30.486949 containerd[1481]: time="2026-01-17T00:20:30.486442134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 00:20:30.489114 containerd[1481]: time="2026-01-17T00:20:30.487859485Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:30.490939 containerd[1481]: time="2026-01-17T00:20:30.490431491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.549550575s"
Jan 17 00:20:30.490939 containerd[1481]: time="2026-01-17T00:20:30.490490142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:20:30.490939 containerd[1481]: time="2026-01-17T00:20:30.490569645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:30.496489 containerd[1481]: time="2026-01-17T00:20:30.496432328Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:20:30.516243 containerd[1481]: time="2026-01-17T00:20:30.516093411Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3\""
Jan 17 00:20:30.517479 containerd[1481]: time="2026-01-17T00:20:30.517168419Z" level=info msg="StartContainer for \"dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3\""
Jan 17 00:20:30.577630 systemd[1]: Started cri-containerd-dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3.scope - libcontainer container dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3.
Jan 17 00:20:30.624793 containerd[1481]: time="2026-01-17T00:20:30.624371677Z" level=info msg="StartContainer for \"dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3\" returns successfully"
Jan 17 00:20:30.639955 systemd[1]: cri-containerd-dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3.scope: Deactivated successfully.
Jan 17 00:20:30.680433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3-rootfs.mount: Deactivated successfully.
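[Editor's note: the pull, create, start sequence above is the ordinary CRI flow for an init container; flexvol-driver exits almost immediately, hence its scope deactivating right after the successful start. A rough sketch of the same sequence against containerd's Go client, assuming the default socket and the k8s.io namespace used in these entries:]

```go
// Sketch of pull -> create -> start with the containerd Go client.
// Image reference taken from the log; container/snapshot IDs are mine.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Corresponds to the "PullImage ... returns image reference" entries.
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer/StartContainer map onto NewContainer + NewTask + Start.
	c, err := client.NewContainer(ctx, "flexvol-driver",
		containerd.WithNewSnapshot("flexvol-driver-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```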
Jan 17 00:20:30.727843 containerd[1481]: time="2026-01-17T00:20:30.691017638Z" level=info msg="shim disconnected" id=dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3 namespace=k8s.io
Jan 17 00:20:30.727843 containerd[1481]: time="2026-01-17T00:20:30.727380208Z" level=warning msg="cleaning up after shim disconnected" id=dc39494d5f85a9370a3e36efeae937abf7616ea4c290acc8ec004db00a92a1a3 namespace=k8s.io
Jan 17 00:20:30.727843 containerd[1481]: time="2026-01-17T00:20:30.727407699Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:31.316707 kubelet[2532]: E0117 00:20:31.316566 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:31.320448 containerd[1481]: time="2026-01-17T00:20:31.319396443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:20:31.338641 kubelet[2532]: I0117 00:20:31.338552 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-575f495d56-j4tqr" podStartSLOduration=3.642361547 podStartE2EDuration="6.338521891s" podCreationTimestamp="2026-01-17 00:20:25 +0000 UTC" firstStartedPulling="2026-01-17 00:20:26.244432304 +0000 UTC m=+24.424123582" lastFinishedPulling="2026-01-17 00:20:28.940592648 +0000 UTC m=+27.120283926" observedRunningTime="2026-01-17 00:20:29.338863392 +0000 UTC m=+27.518554697" watchObservedRunningTime="2026-01-17 00:20:31.338521891 +0000 UTC m=+29.518213209"
Jan 17 00:20:32.159881 kubelet[2532]: E0117 00:20:32.159792 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045"
Jan 17 00:20:33.884278 kubelet[2532]: I0117 00:20:33.884069 2532 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 00:20:33.884842 kubelet[2532]: E0117 00:20:33.884569 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:34.159795 kubelet[2532]: E0117 00:20:34.159668 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045"
Jan 17 00:20:34.325439 kubelet[2532]: E0117 00:20:34.324876 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:35.979147 containerd[1481]: time="2026-01-17T00:20:35.978157160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:35.980044 containerd[1481]: time="2026-01-17T00:20:35.980000917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 17 00:20:35.981199 containerd[1481]: time="2026-01-17T00:20:35.981149695Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:35.983657 containerd[1481]: time="2026-01-17T00:20:35.983619737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:35.984474 containerd[1481]: time="2026-01-17T00:20:35.984440075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.664982589s"
Jan 17 00:20:35.984553 containerd[1481]: time="2026-01-17T00:20:35.984477622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 17 00:20:35.990510 containerd[1481]: time="2026-01-17T00:20:35.990462124Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:20:36.038313 containerd[1481]: time="2026-01-17T00:20:36.038198096Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b\""
Jan 17 00:20:36.041319 containerd[1481]: time="2026-01-17T00:20:36.040625165Z" level=info msg="StartContainer for \"4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b\""
Jan 17 00:20:36.117569 systemd[1]: Started cri-containerd-4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b.scope - libcontainer container 4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b.
Jan 17 00:20:36.152570 containerd[1481]: time="2026-01-17T00:20:36.152516535Z" level=info msg="StartContainer for \"4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b\" returns successfully"
Jan 17 00:20:36.164238 kubelet[2532]: E0117 00:20:36.164172 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045"
Jan 17 00:20:36.337808 kubelet[2532]: E0117 00:20:36.335816 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:36.895036 systemd[1]: cri-containerd-4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b.scope: Deactivated successfully.
Jan 17 00:20:36.920578 kubelet[2532]: I0117 00:20:36.919395 2532 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 17 00:20:36.933871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b-rootfs.mount: Deactivated successfully.
Jan 17 00:20:36.992858 containerd[1481]: time="2026-01-17T00:20:36.991218346Z" level=info msg="shim disconnected" id=4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b namespace=k8s.io
Jan 17 00:20:36.992858 containerd[1481]: time="2026-01-17T00:20:36.991330999Z" level=warning msg="cleaning up after shim disconnected" id=4297abb83f9ba1027309bf2c78533fb383994cc5249af0138091880bdd991a6b namespace=k8s.io
Jan 17 00:20:36.992858 containerd[1481]: time="2026-01-17T00:20:36.991384095Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:37.031362 systemd[1]: Created slice kubepods-burstable-pod3b5332e7_48f1_496f_8faa_d235dc24f5d8.slice - libcontainer container kubepods-burstable-pod3b5332e7_48f1_496f_8faa_d235dc24f5d8.slice.
Jan 17 00:20:37.054652 containerd[1481]: time="2026-01-17T00:20:37.054043245Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:20:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:20:37.060107 systemd[1]: Created slice kubepods-burstable-poda20c7c12_692e_4309_b8a8_a42052435b98.slice - libcontainer container kubepods-burstable-poda20c7c12_692e_4309_b8a8_a42052435b98.slice.
Jan 17 00:20:37.095104 kubelet[2532]: I0117 00:20:37.094461 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c98a2f-bcb2-4019-8b39-98c736ccd677-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-m249n\" (UID: \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\") " pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.095104 kubelet[2532]: I0117 00:20:37.094504 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-ca-bundle\") pod \"whisker-559c9bff5c-pgg2b\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " pod="calico-system/whisker-559c9bff5c-pgg2b"
Jan 17 00:20:37.095104 kubelet[2532]: I0117 00:20:37.094526 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75ed374b-4149-4248-8b00-b1cb0ceb9572-tigera-ca-bundle\") pod \"calico-kube-controllers-67d567dbb6-j2ffv\" (UID: \"75ed374b-4149-4248-8b00-b1cb0ceb9572\") " pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv"
Jan 17 00:20:37.095104 kubelet[2532]: I0117 00:20:37.094546 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b7z\" (UniqueName: \"kubernetes.io/projected/b34f9844-1f24-4158-8f3c-e8308ca5c340-kube-api-access-g6b7z\") pod \"calico-apiserver-599ddd4698-s5fxr\" (UID: \"b34f9844-1f24-4158-8f3c-e8308ca5c340\") " pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr"
Jan 17 00:20:37.095104 kubelet[2532]: I0117 00:20:37.094562 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c98a2f-bcb2-4019-8b39-98c736ccd677-config\") pod \"goldmane-7c778bb748-m249n\" (UID: \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\") " pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.095414 kubelet[2532]: I0117 00:20:37.094586 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a20c7c12-692e-4309-b8a8-a42052435b98-config-volume\") pod \"coredns-66bc5c9577-ntxlc\" (UID: \"a20c7c12-692e-4309-b8a8-a42052435b98\") " pod="kube-system/coredns-66bc5c9577-ntxlc"
Jan 17 00:20:37.095414 kubelet[2532]: I0117 00:20:37.094625 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twm7\" (UniqueName: \"kubernetes.io/projected/a20c7c12-692e-4309-b8a8-a42052435b98-kube-api-access-7twm7\") pod \"coredns-66bc5c9577-ntxlc\" (UID: \"a20c7c12-692e-4309-b8a8-a42052435b98\") " pod="kube-system/coredns-66bc5c9577-ntxlc"
Jan 17 00:20:37.095414 kubelet[2532]: I0117 00:20:37.094651 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-backend-key-pair\") pod \"whisker-559c9bff5c-pgg2b\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " pod="calico-system/whisker-559c9bff5c-pgg2b"
Jan 17 00:20:37.095414 kubelet[2532]: I0117 00:20:37.094666 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b5332e7-48f1-496f-8faa-d235dc24f5d8-config-volume\") pod \"coredns-66bc5c9577-fqpfh\" (UID: \"3b5332e7-48f1-496f-8faa-d235dc24f5d8\") " pod="kube-system/coredns-66bc5c9577-fqpfh"
Jan 17 00:20:37.095414 kubelet[2532]: I0117 00:20:37.094762 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8svz\" (UniqueName: \"kubernetes.io/projected/a3c98a2f-bcb2-4019-8b39-98c736ccd677-kube-api-access-j8svz\") pod \"goldmane-7c778bb748-m249n\" (UID: \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\") " pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.095126 systemd[1]: Created slice kubepods-besteffort-pod7cd0417c_a83c_4bf0_96f8_9680bbeb055b.slice - libcontainer container kubepods-besteffort-pod7cd0417c_a83c_4bf0_96f8_9680bbeb055b.slice.
Jan 17 00:20:37.095615 kubelet[2532]: I0117 00:20:37.094786 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4689k\" (UniqueName: \"kubernetes.io/projected/75ed374b-4149-4248-8b00-b1cb0ceb9572-kube-api-access-4689k\") pod \"calico-kube-controllers-67d567dbb6-j2ffv\" (UID: \"75ed374b-4149-4248-8b00-b1cb0ceb9572\") " pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv"
Jan 17 00:20:37.095615 kubelet[2532]: I0117 00:20:37.094823 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a3c98a2f-bcb2-4019-8b39-98c736ccd677-goldmane-key-pair\") pod \"goldmane-7c778bb748-m249n\" (UID: \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\") " pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.095615 kubelet[2532]: I0117 00:20:37.094848 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/1837a900-7d52-485d-8ce9-13ccc023b76c-kube-api-access-2746n\") pod \"whisker-559c9bff5c-pgg2b\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " pod="calico-system/whisker-559c9bff5c-pgg2b"
Jan 17 00:20:37.095615 kubelet[2532]: I0117 00:20:37.094907 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpj4h\" (UniqueName: \"kubernetes.io/projected/7cd0417c-a83c-4bf0-96f8-9680bbeb055b-kube-api-access-hpj4h\") pod \"calico-apiserver-599ddd4698-rsvbr\" (UID: \"7cd0417c-a83c-4bf0-96f8-9680bbeb055b\") " pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr"
Jan 17 00:20:37.095615 kubelet[2532]: I0117 00:20:37.094935 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcrv\" (UniqueName: \"kubernetes.io/projected/3b5332e7-48f1-496f-8faa-d235dc24f5d8-kube-api-access-ldcrv\") pod \"coredns-66bc5c9577-fqpfh\" (UID: \"3b5332e7-48f1-496f-8faa-d235dc24f5d8\") " pod="kube-system/coredns-66bc5c9577-fqpfh"
Jan 17 00:20:37.095792 kubelet[2532]: I0117 00:20:37.094966 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cd0417c-a83c-4bf0-96f8-9680bbeb055b-calico-apiserver-certs\") pod \"calico-apiserver-599ddd4698-rsvbr\" (UID: \"7cd0417c-a83c-4bf0-96f8-9680bbeb055b\") " pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr"
Jan 17 00:20:37.095792 kubelet[2532]: I0117 00:20:37.094988 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b34f9844-1f24-4158-8f3c-e8308ca5c340-calico-apiserver-certs\") pod \"calico-apiserver-599ddd4698-s5fxr\" (UID: \"b34f9844-1f24-4158-8f3c-e8308ca5c340\") " pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr"
Jan 17 00:20:37.109181 systemd[1]: Created slice kubepods-besteffort-pod75ed374b_4149_4248_8b00_b1cb0ceb9572.slice - libcontainer container kubepods-besteffort-pod75ed374b_4149_4248_8b00_b1cb0ceb9572.slice.
Jan 17 00:20:37.120124 systemd[1]: Created slice kubepods-besteffort-podb34f9844_1f24_4158_8f3c_e8308ca5c340.slice - libcontainer container kubepods-besteffort-podb34f9844_1f24_4158_8f3c_e8308ca5c340.slice.
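[Editor's note: each "Created slice" entry maps a newly scheduled pod to a systemd cgroup. The kubelet's systemd cgroup driver nests pods under a QoS-class slice (besteffort, burstable) and embeds the pod UID with dashes rewritten to underscores, which is exactly the naming visible above. A tiny sketch reproducing that convention; the helper is mine, not kubelet code:]

```go
// Reproduce the kubepods slice naming seen in the systemd entries above.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds "kubepods-<qos>-pod<uid-with-underscores>.slice".
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Prints kubepods-besteffort-podb34f9844_1f24_4158_8f3c_e8308ca5c340.slice,
	// matching the calico-apiserver pod's slice in the log.
	fmt.Println(podSlice("besteffort", "b34f9844-1f24-4158-8f3c-e8308ca5c340"))
}
```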
Jan 17 00:20:37.127372 systemd[1]: Created slice kubepods-besteffort-pod1837a900_7d52_485d_8ce9_13ccc023b76c.slice - libcontainer container kubepods-besteffort-pod1837a900_7d52_485d_8ce9_13ccc023b76c.slice.
Jan 17 00:20:37.139050 systemd[1]: Created slice kubepods-besteffort-poda3c98a2f_bcb2_4019_8b39_98c736ccd677.slice - libcontainer container kubepods-besteffort-poda3c98a2f_bcb2_4019_8b39_98c736ccd677.slice.
Jan 17 00:20:37.341486 kubelet[2532]: E0117 00:20:37.341360 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:37.346168 containerd[1481]: time="2026-01-17T00:20:37.345526592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 17 00:20:37.351588 kubelet[2532]: E0117 00:20:37.351545 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:37.352166 containerd[1481]: time="2026-01-17T00:20:37.352105518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqpfh,Uid:3b5332e7-48f1-496f-8faa-d235dc24f5d8,Namespace:kube-system,Attempt:0,}"
Jan 17 00:20:37.395223 kubelet[2532]: E0117 00:20:37.392600 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:20:37.395498 containerd[1481]: time="2026-01-17T00:20:37.394629144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ntxlc,Uid:a20c7c12-692e-4309-b8a8-a42052435b98,Namespace:kube-system,Attempt:0,}"
Jan 17 00:20:37.429709 containerd[1481]: time="2026-01-17T00:20:37.429665090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-s5fxr,Uid:b34f9844-1f24-4158-8f3c-e8308ca5c340,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:20:37.430633 containerd[1481]: time="2026-01-17T00:20:37.430465879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-rsvbr,Uid:7cd0417c-a83c-4bf0-96f8-9680bbeb055b,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:20:37.431204 containerd[1481]: time="2026-01-17T00:20:37.430523606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d567dbb6-j2ffv,Uid:75ed374b-4149-4248-8b00-b1cb0ceb9572,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:37.434010 containerd[1481]: time="2026-01-17T00:20:37.433970562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559c9bff5c-pgg2b,Uid:1837a900-7d52-485d-8ce9-13ccc023b76c,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:37.458189 containerd[1481]: time="2026-01-17T00:20:37.458147225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m249n,Uid:a3c98a2f-bcb2-4019-8b39-98c736ccd677,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:37.787435 containerd[1481]: time="2026-01-17T00:20:37.787366777Z" level=error msg="Failed to destroy network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.787827 containerd[1481]: time="2026-01-17T00:20:37.787668125Z" level=error msg="Failed to destroy network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.793602 containerd[1481]: time="2026-01-17T00:20:37.793465948Z" level=error msg="encountered an error cleaning up failed sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.793602 containerd[1481]: time="2026-01-17T00:20:37.793565025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqpfh,Uid:3b5332e7-48f1-496f-8faa-d235dc24f5d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.797041 containerd[1481]: time="2026-01-17T00:20:37.795325308Z" level=error msg="encountered an error cleaning up failed sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.797041 containerd[1481]: time="2026-01-17T00:20:37.795410106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m249n,Uid:a3c98a2f-bcb2-4019-8b39-98c736ccd677,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.805483 containerd[1481]: time="2026-01-17T00:20:37.805404941Z" level=error msg="Failed to destroy network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.806695 kubelet[2532]: E0117 00:20:37.806635 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.806968 kubelet[2532]: E0117 00:20:37.806752 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fqpfh"
Jan 17 00:20:37.806968 kubelet[2532]: E0117 00:20:37.806779 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fqpfh"
Jan 17 00:20:37.806968 kubelet[2532]: E0117 00:20:37.806842 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fqpfh_kube-system(3b5332e7-48f1-496f-8faa-d235dc24f5d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fqpfh_kube-system(3b5332e7-48f1-496f-8faa-d235dc24f5d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fqpfh" podUID="3b5332e7-48f1-496f-8faa-d235dc24f5d8"
Jan 17 00:20:37.807783 containerd[1481]: time="2026-01-17T00:20:37.807532155Z" level=error msg="encountered an error cleaning up failed sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.807783 containerd[1481]: time="2026-01-17T00:20:37.807599879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ntxlc,Uid:a20c7c12-692e-4309-b8a8-a42052435b98,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.808472 kubelet[2532]: E0117 00:20:37.808433 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.808716 kubelet[2532]: E0117 00:20:37.808506 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.808716 kubelet[2532]: E0117 00:20:37.808529 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-m249n"
Jan 17 00:20:37.808716 kubelet[2532]: E0117 00:20:37.808597 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-m249n_calico-system(a3c98a2f-bcb2-4019-8b39-98c736ccd677)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-m249n_calico-system(a3c98a2f-bcb2-4019-8b39-98c736ccd677)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677"
Jan 17 00:20:37.809422 kubelet[2532]: E0117 00:20:37.808776 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.809422 kubelet[2532]: E0117 00:20:37.808798 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ntxlc"
Jan 17 00:20:37.809422 kubelet[2532]: E0117 00:20:37.808812 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ntxlc"
Jan 17 00:20:37.809513 kubelet[2532]: E0117 00:20:37.808841 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ntxlc_kube-system(a20c7c12-692e-4309-b8a8-a42052435b98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ntxlc_kube-system(a20c7c12-692e-4309-b8a8-a42052435b98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ntxlc" podUID="a20c7c12-692e-4309-b8a8-a42052435b98"
Jan 17 00:20:37.817766 containerd[1481]: time="2026-01-17T00:20:37.817601637Z" level=error msg="Failed to destroy network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
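[Editor's note: every sandbox failure in this burst shares one root cause: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and the calico/node image is still being pulled at 00:20:37.346. Until the file exists, every CNI add or delete fails with the stat error repeated here. A sketch of the failing lookup; the path comes from the log, the helper is illustrative:]

```go
// Illustrative reproduction of the nodename check behind the CNI errors:
// a missing file yields the same "stat ... no such file or directory" hint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(name)
}
```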
Jan 17 00:20:37.818409 containerd[1481]: time="2026-01-17T00:20:37.818379700Z" level=error msg="encountered an error cleaning up failed sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.818540 containerd[1481]: time="2026-01-17T00:20:37.818510592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-s5fxr,Uid:b34f9844-1f24-4158-8f3c-e8308ca5c340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.821994 containerd[1481]: time="2026-01-17T00:20:37.821336242Z" level=error msg="Failed to destroy network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.823436 containerd[1481]: time="2026-01-17T00:20:37.822876966Z" level=error msg="encountered an error cleaning up failed sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.823436 containerd[1481]: time="2026-01-17T00:20:37.822956757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-rsvbr,Uid:7cd0417c-a83c-4bf0-96f8-9680bbeb055b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.824125 kubelet[2532]: E0117 00:20:37.823301 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.824125 kubelet[2532]: E0117 00:20:37.823417 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr"
Jan 17 00:20:37.824125 kubelet[2532]: E0117 00:20:37.823456 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr"
Jan 17 00:20:37.826143 kubelet[2532]: E0117 00:20:37.823513 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599ddd4698-rsvbr_calico-apiserver(7cd0417c-a83c-4bf0-96f8-9680bbeb055b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599ddd4698-rsvbr_calico-apiserver(7cd0417c-a83c-4bf0-96f8-9680bbeb055b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b"
Jan 17 00:20:37.826143 kubelet[2532]: E0117 00:20:37.823560 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.826143 kubelet[2532]: E0117 00:20:37.823579 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr"
Jan 17 00:20:37.826345 kubelet[2532]: E0117 00:20:37.823592 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr"
Jan 17 00:20:37.826345 kubelet[2532]: E0117 00:20:37.823617 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599ddd4698-s5fxr_calico-apiserver(b34f9844-1f24-4158-8f3c-e8308ca5c340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599ddd4698-s5fxr_calico-apiserver(b34f9844-1f24-4158-8f3c-e8308ca5c340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340"
Jan 17 00:20:37.836411 containerd[1481]: time="2026-01-17T00:20:37.836322383Z" level=error msg="Failed to destroy network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.836953 containerd[1481]: time="2026-01-17T00:20:37.836917930Z" level=error msg="encountered an error cleaning up failed sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.837205 containerd[1481]: time="2026-01-17T00:20:37.837137047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559c9bff5c-pgg2b,Uid:1837a900-7d52-485d-8ce9-13ccc023b76c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.837620 kubelet[2532]: E0117 00:20:37.837568 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.837690 kubelet[2532]: E0117 00:20:37.837636 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-559c9bff5c-pgg2b"
Jan 17 00:20:37.837690 kubelet[2532]: E0117 00:20:37.837666 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-559c9bff5c-pgg2b"
Jan 17 00:20:37.837750 kubelet[2532]: E0117 00:20:37.837724 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-559c9bff5c-pgg2b_calico-system(1837a900-7d52-485d-8ce9-13ccc023b76c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-559c9bff5c-pgg2b_calico-system(1837a900-7d52-485d-8ce9-13ccc023b76c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-559c9bff5c-pgg2b" podUID="1837a900-7d52-485d-8ce9-13ccc023b76c"
Jan 17 00:20:37.849569 containerd[1481]: time="2026-01-17T00:20:37.849503466Z" level=error msg="Failed to destroy network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.849887 containerd[1481]: time="2026-01-17T00:20:37.849859669Z" level=error msg="encountered an error cleaning up failed sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.849994 containerd[1481]: time="2026-01-17T00:20:37.849918820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d567dbb6-j2ffv,Uid:75ed374b-4149-4248-8b00-b1cb0ceb9572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.850550 kubelet[2532]: E0117 00:20:37.850336 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:20:37.850550 kubelet[2532]: E0117 00:20:37.850406 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv"
Jan 17 00:20:37.850550 kubelet[2532]: E0117 00:20:37.850428 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv"
Jan 17 00:20:37.850692 kubelet[2532]: E0117 00:20:37.850498 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d567dbb6-j2ffv_calico-system(75ed374b-4149-4248-8b00-b1cb0ceb9572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d567dbb6-j2ffv_calico-system(75ed374b-4149-4248-8b00-b1cb0ceb9572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:20:38.168408 systemd[1]: Created slice kubepods-besteffort-pod785ca1fd_8ad2_4e63_be23_ced8350e2045.slice - libcontainer container kubepods-besteffort-pod785ca1fd_8ad2_4e63_be23_ced8350e2045.slice. Jan 17 00:20:38.175850 containerd[1481]: time="2026-01-17T00:20:38.175779729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbn95,Uid:785ca1fd-8ad2-4e63-be23-ced8350e2045,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:38.265108 containerd[1481]: time="2026-01-17T00:20:38.264980627Z" level=error msg="Failed to destroy network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.265485 containerd[1481]: time="2026-01-17T00:20:38.265434687Z" level=error msg="encountered an error cleaning up failed sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.265553 containerd[1481]: time="2026-01-17T00:20:38.265491040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbn95,Uid:785ca1fd-8ad2-4e63-be23-ced8350e2045,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.267510 kubelet[2532]: E0117 00:20:38.267442 2532 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.267596 kubelet[2532]: E0117 00:20:38.267524 2532 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pbn95" Jan 17 00:20:38.267596 kubelet[2532]: E0117 00:20:38.267548 2532 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pbn95" Jan 17 00:20:38.267659 kubelet[2532]: E0117 00:20:38.267613 2532 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:38.269025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d-shm.mount: Deactivated successfully. Jan 17 00:20:38.347438 kubelet[2532]: I0117 00:20:38.347189 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:38.351821 kubelet[2532]: I0117 00:20:38.351356 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:20:38.353294 containerd[1481]: time="2026-01-17T00:20:38.352952753Z" level=info msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\"" Jan 17 00:20:38.357121 containerd[1481]: time="2026-01-17T00:20:38.356761908Z" level=info msg="Ensure that sandbox ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9 in task-service has been cleanup successfully" Jan 17 00:20:38.360171 containerd[1481]: time="2026-01-17T00:20:38.360113984Z" level=info msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" Jan 17 00:20:38.360627 containerd[1481]: time="2026-01-17T00:20:38.360606393Z" level=info msg="Ensure that sandbox a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0 in task-service has been cleanup successfully" Jan 17 00:20:38.363692 kubelet[2532]: I0117 00:20:38.363654 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:20:38.366986 containerd[1481]: time="2026-01-17T00:20:38.366776391Z" level=info msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\"" Jan 17 00:20:38.367101 containerd[1481]: time="2026-01-17T00:20:38.367034419Z" level=info msg="Ensure that sandbox e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812 in task-service has been cleanup successfully" Jan 17 00:20:38.370194 kubelet[2532]: I0117 00:20:38.369906 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:38.372282 containerd[1481]: time="2026-01-17T00:20:38.372123069Z" level=info msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" Jan 17 00:20:38.372775 containerd[1481]: time="2026-01-17T00:20:38.372730566Z" level=info msg="Ensure that sandbox 1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a in task-service has been cleanup successfully" Jan 17 00:20:38.375882 kubelet[2532]: I0117 00:20:38.375460 2532 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:38.378088 containerd[1481]: time="2026-01-17T00:20:38.377915826Z" level=info msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" Jan 17 00:20:38.379561 containerd[1481]: time="2026-01-17T00:20:38.379522599Z" level=info msg="Ensure that sandbox 4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd in task-service has been cleanup successfully" Jan 17 00:20:38.390140 kubelet[2532]: I0117 00:20:38.389614 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:38.415690 containerd[1481]: time="2026-01-17T00:20:38.415638087Z" level=info msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" Jan 17 00:20:38.420893 containerd[1481]: time="2026-01-17T00:20:38.420558542Z" level=info msg="Ensure that sandbox 71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d in task-service has been cleanup successfully" Jan 17 00:20:38.440191 kubelet[2532]: I0117 00:20:38.439553 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:20:38.443150 containerd[1481]: time="2026-01-17T00:20:38.443105631Z" level=info msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\"" Jan 17 00:20:38.445571 kubelet[2532]: I0117 00:20:38.444874 2532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:38.450370 containerd[1481]: time="2026-01-17T00:20:38.450319223Z" level=info msg="Ensure that sandbox 062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d in task-service has been cleanup successfully" Jan 17 00:20:38.452078 containerd[1481]: time="2026-01-17T00:20:38.445985953Z" level=info msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" Jan 17 00:20:38.453721 containerd[1481]: time="2026-01-17T00:20:38.453414110Z" level=info msg="Ensure that sandbox e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689 in task-service has been cleanup successfully" Jan 17 00:20:38.502319 containerd[1481]: time="2026-01-17T00:20:38.502242330Z" level=error msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" failed" error="failed to destroy network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.502796 kubelet[2532]: E0117 00:20:38.502762 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:20:38.503007 kubelet[2532]: E0117 00:20:38.502916 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9"} Jan 17 00:20:38.503176 kubelet[2532]: E0117 00:20:38.503160 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.503467 kubelet[2532]: E0117 00:20:38.503354 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3c98a2f-bcb2-4019-8b39-98c736ccd677\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:20:38.530417 containerd[1481]: time="2026-01-17T00:20:38.529709018Z" level=error msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" failed" error="failed to destroy network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.530541 kubelet[2532]: E0117 00:20:38.530091 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:20:38.530541 kubelet[2532]: E0117 00:20:38.530148 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d"} Jan 17 00:20:38.530541 kubelet[2532]: E0117 00:20:38.530182 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"785ca1fd-8ad2-4e63-be23-ced8350e2045\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.531177 kubelet[2532]: E0117 00:20:38.530213 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"785ca1fd-8ad2-4e63-be23-ced8350e2045\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:38.538001 containerd[1481]: time="2026-01-17T00:20:38.537914954Z" level=error msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" failed" error="failed to destroy network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.540185 kubelet[2532]: E0117 00:20:38.538527 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:38.540185 kubelet[2532]: E0117 00:20:38.538603 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0"} Jan 17 00:20:38.540185 kubelet[2532]: E0117 00:20:38.538650 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b34f9844-1f24-4158-8f3c-e8308ca5c340\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.540185 kubelet[2532]: E0117 00:20:38.538680 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b34f9844-1f24-4158-8f3c-e8308ca5c340\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:20:38.540741 containerd[1481]: time="2026-01-17T00:20:38.540688604Z" level=error msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" failed" error="failed to destroy network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.541465 kubelet[2532]: E0117 00:20:38.541227 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:20:38.541465 kubelet[2532]: E0117 00:20:38.541313 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812"} Jan 17 00:20:38.541465 kubelet[2532]: E0117 00:20:38.541355 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cd0417c-a83c-4bf0-96f8-9680bbeb055b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.541465 kubelet[2532]: E0117 00:20:38.541395 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cd0417c-a83c-4bf0-96f8-9680bbeb055b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:20:38.562403 containerd[1481]: time="2026-01-17T00:20:38.562335800Z" level=error msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" failed" error="failed to destroy network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.563085 kubelet[2532]: E0117 00:20:38.562850 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:38.563085 kubelet[2532]: E0117 00:20:38.562919 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a"} Jan 17 00:20:38.563085 kubelet[2532]: E0117 00:20:38.562969 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75ed374b-4149-4248-8b00-b1cb0ceb9572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.563085 kubelet[2532]: E0117 00:20:38.563008 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"75ed374b-4149-4248-8b00-b1cb0ceb9572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:20:38.584204 containerd[1481]: time="2026-01-17T00:20:38.584118272Z" level=error msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" failed" error="failed to destroy network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.584593 containerd[1481]: time="2026-01-17T00:20:38.584331022Z" level=error msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" failed" error="failed to destroy network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.585457 kubelet[2532]: E0117 00:20:38.585042 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:38.585457 kubelet[2532]: E0117 00:20:38.585122 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d"} Jan 17 00:20:38.585457 kubelet[2532]: E0117 00:20:38.585170 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a20c7c12-692e-4309-b8a8-a42052435b98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.585457 kubelet[2532]: E0117 00:20:38.585211 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a20c7c12-692e-4309-b8a8-a42052435b98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ntxlc" podUID="a20c7c12-692e-4309-b8a8-a42052435b98" Jan 17 00:20:38.585815 kubelet[2532]: E0117 00:20:38.585295 2532 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:38.585815 kubelet[2532]: E0117 00:20:38.585321 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689"} Jan 17 00:20:38.585815 kubelet[2532]: E0117 00:20:38.585346 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1837a900-7d52-485d-8ce9-13ccc023b76c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.585815 kubelet[2532]: E0117 00:20:38.585377 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1837a900-7d52-485d-8ce9-13ccc023b76c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-559c9bff5c-pgg2b" podUID="1837a900-7d52-485d-8ce9-13ccc023b76c" Jan 17 00:20:38.586390 containerd[1481]: time="2026-01-17T00:20:38.586341561Z" level=error msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" failed" error="failed to destroy network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:38.587051 kubelet[2532]: E0117 00:20:38.586819 2532 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:38.587051 kubelet[2532]: E0117 00:20:38.586898 2532 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd"} Jan 17 00:20:38.587051 kubelet[2532]: E0117 00:20:38.586938 2532 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b5332e7-48f1-496f-8faa-d235dc24f5d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:38.587051 kubelet[2532]: E0117 00:20:38.586997 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b5332e7-48f1-496f-8faa-d235dc24f5d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fqpfh" podUID="3b5332e7-48f1-496f-8faa-d235dc24f5d8" Jan 17 00:20:45.604036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074014415.mount: Deactivated successfully. Jan 17 00:20:45.693089 containerd[1481]: time="2026-01-17T00:20:45.691092387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:45.697492 containerd[1481]: time="2026-01-17T00:20:45.697410743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:20:45.725229 containerd[1481]: time="2026-01-17T00:20:45.725159752Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:45.726314 containerd[1481]: time="2026-01-17T00:20:45.726272870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:45.726968 containerd[1481]: time="2026-01-17T00:20:45.726940447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.38100553s" Jan 17 00:20:45.727015 containerd[1481]: time="2026-01-17T00:20:45.726973529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:20:45.806398 containerd[1481]: time="2026-01-17T00:20:45.806332157Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:20:45.857983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1768256591.mount: Deactivated successfully. 
Jan 17 00:20:45.874037 containerd[1481]: time="2026-01-17T00:20:45.873785551Z" level=info msg="CreateContainer within sandbox \"522c67b974b07919eb6beabf1636149b4003ac71b0273cb765c2e8865f3771f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69\"" Jan 17 00:20:45.884013 containerd[1481]: time="2026-01-17T00:20:45.883760277Z" level=info msg="StartContainer for \"0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69\"" Jan 17 00:20:45.995153 systemd[1]: Started cri-containerd-0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69.scope - libcontainer container 0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69. Jan 17 00:20:46.081213 containerd[1481]: time="2026-01-17T00:20:46.080890830Z" level=info msg="StartContainer for \"0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69\" returns successfully" Jan 17 00:20:46.254948 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:20:46.256436 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:20:46.491421 containerd[1481]: time="2026-01-17T00:20:46.490347635Z" level=info msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" Jan 17 00:20:46.553490 kubelet[2532]: E0117 00:20:46.553345 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:46.663638 kubelet[2532]: I0117 00:20:46.654045 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fh2bp" podStartSLOduration=2.204300059 podStartE2EDuration="21.640060854s" podCreationTimestamp="2026-01-17 00:20:25 +0000 UTC" firstStartedPulling="2026-01-17 00:20:26.297433525 +0000 UTC m=+24.477124804" lastFinishedPulling="2026-01-17 00:20:45.73319432 +0000 UTC m=+43.912885599" observedRunningTime="2026-01-17 00:20:46.625330729 +0000 UTC m=+44.805022035" watchObservedRunningTime="2026-01-17 00:20:46.640060854 +0000 UTC m=+44.819752156" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.640 [INFO][3782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.641 [INFO][3782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" iface="eth0" netns="/var/run/netns/cni-02b9084d-6795-e5cc-5e11-77bdc9df10d6" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.642 [INFO][3782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" iface="eth0" netns="/var/run/netns/cni-02b9084d-6795-e5cc-5e11-77bdc9df10d6" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.643 [INFO][3782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" iface="eth0" netns="/var/run/netns/cni-02b9084d-6795-e5cc-5e11-77bdc9df10d6" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.643 [INFO][3782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.643 [INFO][3782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.788 [INFO][3790] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.789 [INFO][3790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.790 [INFO][3790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.803 [WARNING][3790] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.803 [INFO][3790] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.807 [INFO][3790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:46.815393 containerd[1481]: 2026-01-17 00:20:46.809 [INFO][3782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:20:46.817680 systemd[1]: run-netns-cni\x2d02b9084d\x2d6795\x2de5cc\x2d5e11\x2d77bdc9df10d6.mount: Deactivated successfully. 
Jan 17 00:20:46.819652 containerd[1481]: time="2026-01-17T00:20:46.819445569Z" level=info msg="TearDown network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" successfully" Jan 17 00:20:46.819652 containerd[1481]: time="2026-01-17T00:20:46.819484295Z" level=info msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" returns successfully" Jan 17 00:20:46.902304 kubelet[2532]: I0117 00:20:46.900858 2532 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-ca-bundle\") pod \"1837a900-7d52-485d-8ce9-13ccc023b76c\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " Jan 17 00:20:46.902304 kubelet[2532]: I0117 00:20:46.900973 2532 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/1837a900-7d52-485d-8ce9-13ccc023b76c-kube-api-access-2746n\") pod \"1837a900-7d52-485d-8ce9-13ccc023b76c\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " Jan 17 00:20:46.902304 kubelet[2532]: I0117 00:20:46.901015 2532 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-backend-key-pair\") pod \"1837a900-7d52-485d-8ce9-13ccc023b76c\" (UID: \"1837a900-7d52-485d-8ce9-13ccc023b76c\") " Jan 17 00:20:46.923153 kubelet[2532]: I0117 00:20:46.920618 2532 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1837a900-7d52-485d-8ce9-13ccc023b76c" (UID: "1837a900-7d52-485d-8ce9-13ccc023b76c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:20:46.930372 kubelet[2532]: I0117 00:20:46.930184 2532 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1837a900-7d52-485d-8ce9-13ccc023b76c" (UID: "1837a900-7d52-485d-8ce9-13ccc023b76c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:20:46.931918 systemd[1]: var-lib-kubelet-pods-1837a900\x2d7d52\x2d485d\x2d8ce9\x2d13ccc023b76c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:20:46.937809 kubelet[2532]: I0117 00:20:46.935100 2532 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1837a900-7d52-485d-8ce9-13ccc023b76c-kube-api-access-2746n" (OuterVolumeSpecName: "kube-api-access-2746n") pod "1837a900-7d52-485d-8ce9-13ccc023b76c" (UID: "1837a900-7d52-485d-8ce9-13ccc023b76c"). InnerVolumeSpecName "kube-api-access-2746n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:20:46.939287 systemd[1]: var-lib-kubelet-pods-1837a900\x2d7d52\x2d485d\x2d8ce9\x2d13ccc023b76c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2746n.mount: Deactivated successfully. 
Jan 17 00:20:47.005360 kubelet[2532]: I0117 00:20:47.004936 2532 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8d0945b27f\" DevicePath \"\"" Jan 17 00:20:47.005360 kubelet[2532]: I0117 00:20:47.004977 2532 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2746n\" (UniqueName: \"kubernetes.io/projected/1837a900-7d52-485d-8ce9-13ccc023b76c-kube-api-access-2746n\") on node \"ci-4081.3.6-n-8d0945b27f\" DevicePath \"\"" Jan 17 00:20:47.005360 kubelet[2532]: I0117 00:20:47.004989 2532 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1837a900-7d52-485d-8ce9-13ccc023b76c-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8d0945b27f\" DevicePath \"\"" Jan 17 00:20:47.555335 kubelet[2532]: I0117 00:20:47.555230 2532 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:47.555918 kubelet[2532]: E0117 00:20:47.555866 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:47.561458 systemd[1]: Removed slice kubepods-besteffort-pod1837a900_7d52_485d_8ce9_13ccc023b76c.slice - libcontainer container kubepods-besteffort-pod1837a900_7d52_485d_8ce9_13ccc023b76c.slice. Jan 17 00:20:47.672831 systemd[1]: Created slice kubepods-besteffort-pod8f412fc1_0816_4220_80a1_194b624badc8.slice - libcontainer container kubepods-besteffort-pod8f412fc1_0816_4220_80a1_194b624badc8.slice. Jan 17 00:20:47.710494 kubelet[2532]: I0117 00:20:47.710379 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f412fc1-0816-4220-80a1-194b624badc8-whisker-backend-key-pair\") pod \"whisker-6b849fdd9-cjl8m\" (UID: \"8f412fc1-0816-4220-80a1-194b624badc8\") " pod="calico-system/whisker-6b849fdd9-cjl8m" Jan 17 00:20:47.710494 kubelet[2532]: I0117 00:20:47.710436 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckpt\" (UniqueName: \"kubernetes.io/projected/8f412fc1-0816-4220-80a1-194b624badc8-kube-api-access-wckpt\") pod \"whisker-6b849fdd9-cjl8m\" (UID: \"8f412fc1-0816-4220-80a1-194b624badc8\") " pod="calico-system/whisker-6b849fdd9-cjl8m" Jan 17 00:20:47.710494 kubelet[2532]: I0117 00:20:47.710471 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f412fc1-0816-4220-80a1-194b624badc8-whisker-ca-bundle\") pod \"whisker-6b849fdd9-cjl8m\" (UID: \"8f412fc1-0816-4220-80a1-194b624badc8\") " pod="calico-system/whisker-6b849fdd9-cjl8m" Jan 17 00:20:47.987964 containerd[1481]: time="2026-01-17T00:20:47.987542538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b849fdd9-cjl8m,Uid:8f412fc1-0816-4220-80a1-194b624badc8,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:48.163038 kubelet[2532]: I0117 00:20:48.162988 2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1837a900-7d52-485d-8ce9-13ccc023b76c" path="/var/lib/kubelet/pods/1837a900-7d52-485d-8ce9-13ccc023b76c/volumes" Jan 17 00:20:48.275018 systemd-networkd[1368]: cali12196b867c1: Link UP Jan 17 00:20:48.277316 systemd-networkd[1368]: cali12196b867c1: 
Gained carrier Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.114 [INFO][3861] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.134 [INFO][3861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0 whisker-6b849fdd9- calico-system 8f412fc1-0816-4220-80a1-194b624badc8 934 0 2026-01-17 00:20:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b849fdd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f whisker-6b849fdd9-cjl8m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12196b867c1 [] [] }} ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.134 [INFO][3861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.183 [INFO][3908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" HandleID="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.184 [INFO][3908] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" HandleID="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"whisker-6b849fdd9-cjl8m", "timestamp":"2026-01-17 00:20:48.183231201 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.184 [INFO][3908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.184 [INFO][3908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.184 [INFO][3908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.195 [INFO][3908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.203 [INFO][3908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.211 [INFO][3908] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.213 [INFO][3908] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.216 [INFO][3908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.216 [INFO][3908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.222 [INFO][3908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68 Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.229 [INFO][3908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.237 [INFO][3908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 handle="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.237 [INFO][3908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.237 [INFO][3908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:48.296035 containerd[1481]: 2026-01-17 00:20:48.238 [INFO][3908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" HandleID="k8s-pod-network.6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.243 [INFO][3861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0", GenerateName:"whisker-6b849fdd9-", Namespace:"calico-system", SelfLink:"", UID:"8f412fc1-0816-4220-80a1-194b624badc8", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b849fdd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"whisker-6b849fdd9-cjl8m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12196b867c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.243 [INFO][3861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.1/32] ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.243 [INFO][3861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12196b867c1 ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.259 [INFO][3861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.261 [INFO][3861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" 
Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0", GenerateName:"whisker-6b849fdd9-", Namespace:"calico-system", SelfLink:"", UID:"8f412fc1-0816-4220-80a1-194b624badc8", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b849fdd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68", Pod:"whisker-6b849fdd9-cjl8m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12196b867c1", MAC:"8e:89:56:2b:78:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:48.299726 containerd[1481]: 2026-01-17 00:20:48.289 [INFO][3861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68" Namespace="calico-system" Pod="whisker-6b849fdd9-cjl8m" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--6b849fdd9--cjl8m-eth0" Jan 17 00:20:48.363735 containerd[1481]: time="2026-01-17T00:20:48.363571881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:48.365057 containerd[1481]: time="2026-01-17T00:20:48.364731296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:48.365057 containerd[1481]: time="2026-01-17T00:20:48.364795072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:48.366635 containerd[1481]: time="2026-01-17T00:20:48.366557474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:48.447037 systemd[1]: Started cri-containerd-6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68.scope - libcontainer container 6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68. 
Jan 17 00:20:48.602855 containerd[1481]: time="2026-01-17T00:20:48.601045100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b849fdd9-cjl8m,Uid:8f412fc1-0816-4220-80a1-194b624badc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c4b748a6033e9957f0f7aa446fa7ac25c2de3d057a7f76e4f1140b595cb8b68\""
Jan 17 00:20:48.610762 containerd[1481]: time="2026-01-17T00:20:48.610344544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:20:48.754308 kernel: bpftool[4000]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 17 00:20:48.942788 containerd[1481]: time="2026-01-17T00:20:48.942707732Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:20:48.952759 containerd[1481]: time="2026-01-17T00:20:48.944428978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:20:48.952759 containerd[1481]: time="2026-01-17T00:20:48.944501381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:20:48.953483 kubelet[2532]: E0117 00:20:48.953094 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:20:48.953483 kubelet[2532]: E0117 00:20:48.953179 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:20:48.964003 kubelet[2532]: E0117 00:20:48.958103 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:20:48.965635 containerd[1481]: time="2026-01-17T00:20:48.965601478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:20:49.064338 systemd-networkd[1368]: vxlan.calico: Link UP
Jan 17 00:20:49.064349 systemd-networkd[1368]: vxlan.calico: Gained carrier
Jan 17 00:20:49.160604 containerd[1481]: time="2026-01-17T00:20:49.160558620Z" level=info msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\""
Jan 17 00:20:49.161845 containerd[1481]: time="2026-01-17T00:20:49.161584726Z" level=info msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\""
Jan 17 00:20:49.272274 containerd[1481]: time="2026-01-17T00:20:49.271795130Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:20:49.277625 containerd[1481]: time="2026-01-17T00:20:49.277181651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:20:49.277625 containerd[1481]: time="2026-01-17T00:20:49.277290964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:20:49.281713 kubelet[2532]: E0117 00:20:49.279833 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:20:49.281713 kubelet[2532]: E0117 00:20:49.279892 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:20:49.281713 kubelet[2532]: E0117 00:20:49.279974 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:20:49.281938 kubelet[2532]: E0117 00:20:49.280032 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.274 [INFO][4054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.274 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" iface="eth0" netns="/var/run/netns/cni-54b5bd84-e9bc-c636-02c5-8c7e6851278f"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.274 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" iface="eth0" netns="/var/run/netns/cni-54b5bd84-e9bc-c636-02c5-8c7e6851278f"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.277 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" iface="eth0" netns="/var/run/netns/cni-54b5bd84-e9bc-c636-02c5-8c7e6851278f"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.277 [INFO][4054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.277 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.328 [INFO][4068] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.328 [INFO][4068] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.328 [INFO][4068] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.338 [WARNING][4068] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.338 [INFO][4068] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.340 [INFO][4068] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:20:49.349400 containerd[1481]: 2026-01-17 00:20:49.344 [INFO][4054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812"
Jan 17 00:20:49.352341 containerd[1481]: time="2026-01-17T00:20:49.350057914Z" level=info msg="TearDown network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" successfully"
Jan 17 00:20:49.352341 containerd[1481]: time="2026-01-17T00:20:49.350092855Z" level=info msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" returns successfully"
Jan 17 00:20:49.356787 systemd[1]: run-netns-cni\x2d54b5bd84\x2de9bc\x2dc636\x2d02c5\x2d8c7e6851278f.mount: Deactivated successfully.
Jan 17 00:20:49.357411 containerd[1481]: time="2026-01-17T00:20:49.357379831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-rsvbr,Uid:7cd0417c-a83c-4bf0-96f8-9680bbeb055b,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.290 [INFO][4055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.292 [INFO][4055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" iface="eth0" netns="/var/run/netns/cni-ccae32a8-2260-df12-32b2-3f09c7c3b407"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.293 [INFO][4055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" iface="eth0" netns="/var/run/netns/cni-ccae32a8-2260-df12-32b2-3f09c7c3b407"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.295 [INFO][4055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" iface="eth0" netns="/var/run/netns/cni-ccae32a8-2260-df12-32b2-3f09c7c3b407"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.295 [INFO][4055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.295 [INFO][4055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.334 [INFO][4073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.334 [INFO][4073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.340 [INFO][4073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.353 [WARNING][4073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.355 [INFO][4073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.360 [INFO][4073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:20:49.366322 containerd[1481]: 2026-01-17 00:20:49.364 [INFO][4055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9"
Jan 17 00:20:49.379314 containerd[1481]: time="2026-01-17T00:20:49.378159652Z" level=info msg="TearDown network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" successfully"
Jan 17 00:20:49.379314 containerd[1481]: time="2026-01-17T00:20:49.378211707Z" level=info msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" returns successfully"
Jan 17 00:20:49.380445 systemd[1]: run-netns-cni\x2dccae32a8\x2d2260\x2ddf12\x2d32b2\x2d3f09c7c3b407.mount: Deactivated successfully.
Jan 17 00:20:49.389687 containerd[1481]: time="2026-01-17T00:20:49.389562702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m249n,Uid:a3c98a2f-bcb2-4019-8b39-98c736ccd677,Namespace:calico-system,Attempt:1,}"
Jan 17 00:20:49.583338 kubelet[2532]: E0117 00:20:49.581761 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8"
Jan 17 00:20:49.734866 systemd-networkd[1368]: calibf368edc330: Link UP
Jan 17 00:20:49.741074 systemd-networkd[1368]: calibf368edc330: Gained carrier
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.516 [INFO][4083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0 goldmane-7c778bb748- calico-system a3c98a2f-bcb2-4019-8b39-98c736ccd677 949 0 2026-01-17 00:20:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f goldmane-7c778bb748-m249n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibf368edc330 [] [] }} ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.516 [INFO][4083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.621 [INFO][4109] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" HandleID="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.621 [INFO][4109] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" HandleID="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371a40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"goldmane-7c778bb748-m249n", "timestamp":"2026-01-17 00:20:49.621506053 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.621 [INFO][4109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.621 [INFO][4109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.621 [INFO][4109] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f'
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.644 [INFO][4109] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.651 [INFO][4109] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.677 [INFO][4109] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.680 [INFO][4109] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.686 [INFO][4109] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.686 [INFO][4109] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.689 [INFO][4109] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.699 [INFO][4109] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.710 [INFO][4109] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 handle="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.711 [INFO][4109] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.711 [INFO][4109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:20:49.779666 containerd[1481]: 2026-01-17 00:20:49.711 [INFO][4109] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" HandleID="k8s-pod-network.b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.717 [INFO][4083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a3c98a2f-bcb2-4019-8b39-98c736ccd677", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"goldmane-7c778bb748-m249n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf368edc330", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.717 [INFO][4083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.2/32] ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.717 [INFO][4083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf368edc330 ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.743 [INFO][4083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.753 [INFO][4083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a3c98a2f-bcb2-4019-8b39-98c736ccd677", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3", Pod:"goldmane-7c778bb748-m249n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf368edc330", MAC:"2a:71:96:3b:64:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:20:49.780791 containerd[1481]: 2026-01-17 00:20:49.775 [INFO][4083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3" Namespace="calico-system" Pod="goldmane-7c778bb748-m249n" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0"
Jan 17 00:20:49.833769 containerd[1481]: time="2026-01-17T00:20:49.833137504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:49.833769 containerd[1481]: time="2026-01-17T00:20:49.833305996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:49.833769 containerd[1481]: time="2026-01-17T00:20:49.833398934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:49.835452 containerd[1481]: time="2026-01-17T00:20:49.834073888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:49.860867 systemd-networkd[1368]: cali30162db12ff: Link UP
Jan 17 00:20:49.864687 systemd-networkd[1368]: cali30162db12ff: Gained carrier
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.521 [INFO][4084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0 calico-apiserver-599ddd4698- calico-apiserver 7cd0417c-a83c-4bf0-96f8-9680bbeb055b 948 0 2026-01-17 00:20:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599ddd4698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f calico-apiserver-599ddd4698-rsvbr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30162db12ff [] [] }} ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.522 [INFO][4084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.641 [INFO][4115] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" HandleID="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.642 [INFO][4115] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" HandleID="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"calico-apiserver-599ddd4698-rsvbr", "timestamp":"2026-01-17 00:20:49.641306835 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.642 [INFO][4115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.711 [INFO][4115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.711 [INFO][4115] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f'
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.756 [INFO][4115] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.775 [INFO][4115] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.784 [INFO][4115] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.789 [INFO][4115] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.792 [INFO][4115] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.793 [INFO][4115] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.796 [INFO][4115] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.804 [INFO][4115] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.816 [INFO][4115] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 handle="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.816 [INFO][4115] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" host="ci-4081.3.6-n-8d0945b27f"
Jan 17 00:20:49.902910 containerd[1481]: 2026-01-17 00:20:49.816 [INFO][4115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:20:49.903875 containerd[1481]: 2026-01-17 00:20:49.816 [INFO][4115] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" HandleID="k8s-pod-network.5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.903875 containerd[1481]: 2026-01-17 00:20:49.826 [INFO][4084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cd0417c-a83c-4bf0-96f8-9680bbeb055b", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"calico-apiserver-599ddd4698-rsvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30162db12ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:20:49.903875 containerd[1481]: 2026-01-17 00:20:49.827 [INFO][4084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.3/32] ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.903875 containerd[1481]: 2026-01-17 00:20:49.827 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30162db12ff ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.903875 containerd[1481]: 2026-01-17 00:20:49.867 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.905103 containerd[1481]: 2026-01-17 00:20:49.867 [INFO][4084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cd0417c-a83c-4bf0-96f8-9680bbeb055b", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b", Pod:"calico-apiserver-599ddd4698-rsvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30162db12ff", MAC:"62:1d:f5:ea:e7:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:20:49.905103 containerd[1481]: 2026-01-17 00:20:49.894 [INFO][4084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-rsvbr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0"
Jan 17 00:20:49.908838 systemd[1]: Started cri-containerd-b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3.scope - libcontainer container b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3.
Jan 17 00:20:49.958742 containerd[1481]: time="2026-01-17T00:20:49.958357796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:49.958742 containerd[1481]: time="2026-01-17T00:20:49.958437218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:49.958742 containerd[1481]: time="2026-01-17T00:20:49.958449494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:49.958742 containerd[1481]: time="2026-01-17T00:20:49.958550648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:50.008173 systemd[1]: Started cri-containerd-5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b.scope - libcontainer container 5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b.
Jan 17 00:20:50.197289 containerd[1481]: time="2026-01-17T00:20:50.197101770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m249n,Uid:a3c98a2f-bcb2-4019-8b39-98c736ccd677,Namespace:calico-system,Attempt:1,} returns sandbox id \"b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3\""
Jan 17 00:20:50.200207 systemd-networkd[1368]: cali12196b867c1: Gained IPv6LL
Jan 17 00:20:50.202241 containerd[1481]: time="2026-01-17T00:20:50.201638186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 17 00:20:50.284737 containerd[1481]: time="2026-01-17T00:20:50.284496954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-rsvbr,Uid:7cd0417c-a83c-4bf0-96f8-9680bbeb055b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b\""
Jan 17 00:20:50.327709 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL
Jan 17 00:20:50.528528 containerd[1481]: time="2026-01-17T00:20:50.527882328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:20:50.529522 containerd[1481]: time="2026-01-17T00:20:50.529321919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 00:20:50.529522 containerd[1481]: time="2026-01-17T00:20:50.529451568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:20:50.529743 kubelet[2532]: E0117 00:20:50.529694 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:20:50.530491 kubelet[2532]: E0117 00:20:50.529765 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:20:50.530491 kubelet[2532]: E0117 00:20:50.530323 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m249n_calico-system(a3c98a2f-bcb2-4019-8b39-98c736ccd677): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:20:50.530608 kubelet[2532]: E0117 00:20:50.530415 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677"
Jan 17 00:20:50.531507 containerd[1481]: time="2026-01-17T00:20:50.531481176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:20:50.587993 kubelet[2532]: E0117 00:20:50.587838 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677"
Jan 17 00:20:50.592871 kubelet[2532]: E0117 00:20:50.592819 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8"
Jan 17 00:20:50.899415 containerd[1481]: time="2026-01-17T00:20:50.899111741Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:20:50.900413 containerd[1481]: time="2026-01-17T00:20:50.900299212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:20:50.900413 containerd[1481]: time="2026-01-17T00:20:50.900304403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:20:50.901337 kubelet[2532]: E0117 00:20:50.900901 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:20:50.901337 kubelet[2532]: E0117 00:20:50.900967 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:20:50.901337 kubelet[2532]: E0117 00:20:50.901061 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-599ddd4698-rsvbr_calico-apiserver(7cd0417c-a83c-4bf0-96f8-9680bbeb055b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:20:50.901337 kubelet[2532]: E0117 00:20:50.901101 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b"
Jan 17 00:20:50.903484 systemd-networkd[1368]: calibf368edc330: Gained IPv6LL
Jan 17 00:20:51.160386 containerd[1481]: time="2026-01-17T00:20:51.160162724Z" level=info msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\""
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.238 [INFO][4269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.239 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" iface="eth0" netns="/var/run/netns/cni-a54526ad-7045-260e-7ce3-e127bf24ec9b"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.239 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" iface="eth0" netns="/var/run/netns/cni-a54526ad-7045-260e-7ce3-e127bf24ec9b"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.241 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" iface="eth0" netns="/var/run/netns/cni-a54526ad-7045-260e-7ce3-e127bf24ec9b"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.241 [INFO][4269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.241 [INFO][4269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.279 [INFO][4276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.279 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.279 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.289 [WARNING][4276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.289 [INFO][4276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0"
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.293 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:20:51.301285 containerd[1481]: 2026-01-17 00:20:51.296 [INFO][4269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d"
Jan 17 00:20:51.302499 containerd[1481]: time="2026-01-17T00:20:51.302115598Z" level=info msg="TearDown network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" successfully"
Jan 17 00:20:51.302499 containerd[1481]: time="2026-01-17T00:20:51.302170523Z" level=info msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" returns successfully"
Jan 17 00:20:51.304018 systemd[1]: run-netns-cni\x2da54526ad\x2d7045\x2d260e\x2d7ce3\x2de127bf24ec9b.mount: Deactivated successfully.
Jan 17 00:20:51.307949 containerd[1481]: time="2026-01-17T00:20:51.307738895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbn95,Uid:785ca1fd-8ad2-4e63-be23-ced8350e2045,Namespace:calico-system,Attempt:1,}"
Jan 17 00:20:51.479606 systemd-networkd[1368]: cali30162db12ff: Gained IPv6LL
Jan 17 00:20:51.505771 systemd-networkd[1368]: cali89ecd7ff38e: Link UP
Jan 17 00:20:51.508465 systemd-networkd[1368]: cali89ecd7ff38e: Gained carrier
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.383 [INFO][4284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0 csi-node-driver- calico-system 785ca1fd-8ad2-4e63-be23-ced8350e2045 987 0 2026-01-17 00:20:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f csi-node-driver-pbn95 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali89ecd7ff38e [] [] }} ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-"
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.383 [INFO][4284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0"
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.430 [INFO][4296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" HandleID="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0"
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.430 [INFO][4296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" HandleID="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"csi-node-driver-pbn95", "timestamp":"2026-01-17 00:20:51.430556485 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.430 [INFO][4296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.430 [INFO][4296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.430 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.439 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.446 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.454 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.456 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.459 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.459 [INFO][4296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.462 [INFO][4296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3 Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.468 [INFO][4296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.495 [INFO][4296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.495 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.495 [INFO][4296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
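[editor's note] The walk above happens inside the node's affine block 192.168.59.0/26, which holds 2^(32-26) = 64 addresses; .1 through .3 were claimed by earlier sandboxes on this node, so the CSI pod gets .4. A small sketch of the first-free scan under simplified rules (real Calico also tracks handles and reserved attributes; the pre-used entries below just mirror the state visible in this log):

package main

import (
	"fmt"
	"net/netip"
)

func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.59.0/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.59.0"): true, // network address
		netip.MustParseAddr("192.168.59.1"): true,
		netip.MustParseAddr("192.168.59.2"): true,
		netip.MustParseAddr("192.168.59.3"): true,
	}
	a, _ := firstFree(block, used)
	fmt.Println(a)                                  // 192.168.59.4, matching the claimed IP
	fmt.Println("capacity:", 1<<(32-block.Bits())) // 64
}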
Jan 17 00:20:51.537910 containerd[1481]: 2026-01-17 00:20:51.495 [INFO][4296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" HandleID="k8s-pod-network.ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.500 [INFO][4284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"785ca1fd-8ad2-4e63-be23-ced8350e2045", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"csi-node-driver-pbn95", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89ecd7ff38e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.500 [INFO][4284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.4/32] ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.500 [INFO][4284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89ecd7ff38e ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.509 [INFO][4284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.511 [INFO][4284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"785ca1fd-8ad2-4e63-be23-ced8350e2045", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3", Pod:"csi-node-driver-pbn95", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89ecd7ff38e", MAC:"da:3c:ce:2f:ad:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:51.541125 containerd[1481]: 2026-01-17 00:20:51.530 [INFO][4284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3" Namespace="calico-system" Pod="csi-node-driver-pbn95" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:20:51.594015 containerd[1481]: time="2026-01-17T00:20:51.593717016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:51.594015 containerd[1481]: time="2026-01-17T00:20:51.593799867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:51.594015 containerd[1481]: time="2026-01-17T00:20:51.593829261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:51.594804 containerd[1481]: time="2026-01-17T00:20:51.594007643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:51.600437 kubelet[2532]: E0117 00:20:51.599595 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:20:51.635392 kubelet[2532]: E0117 00:20:51.604205 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:20:51.671670 systemd[1]: Started cri-containerd-ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3.scope - libcontainer container ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3. Jan 17 00:20:51.733512 containerd[1481]: time="2026-01-17T00:20:51.733165669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbn95,Uid:785ca1fd-8ad2-4e63-be23-ced8350e2045,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3\"" Jan 17 00:20:51.740137 containerd[1481]: time="2026-01-17T00:20:51.739601626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:20:52.083472 containerd[1481]: time="2026-01-17T00:20:52.083305434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:52.085376 containerd[1481]: time="2026-01-17T00:20:52.084354145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:20:52.085376 containerd[1481]: time="2026-01-17T00:20:52.084459325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:20:52.085627 kubelet[2532]: E0117 00:20:52.084744 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:52.085627 kubelet[2532]: E0117 00:20:52.084801 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:52.085627 kubelet[2532]: E0117 00:20:52.084884 
2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:52.087163 containerd[1481]: time="2026-01-17T00:20:52.087122210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:20:52.453070 containerd[1481]: time="2026-01-17T00:20:52.452809819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:52.454504 containerd[1481]: time="2026-01-17T00:20:52.454299206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:20:52.454675 containerd[1481]: time="2026-01-17T00:20:52.454469052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:20:52.454744 kubelet[2532]: E0117 00:20:52.454701 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:52.454792 kubelet[2532]: E0117 00:20:52.454755 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:52.454887 kubelet[2532]: E0117 00:20:52.454852 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:52.454956 kubelet[2532]: E0117 00:20:52.454916 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:52.604794 kubelet[2532]: E0117 00:20:52.604732 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:52.632402 systemd-networkd[1368]: cali89ecd7ff38e: Gained IPv6LL Jan 17 00:20:53.160952 containerd[1481]: time="2026-01-17T00:20:53.160385881Z" level=info msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" Jan 17 00:20:53.160952 containerd[1481]: time="2026-01-17T00:20:53.160417852Z" level=info msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.264 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.264 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" iface="eth0" netns="/var/run/netns/cni-245df2de-ffd4-8b9f-6fbd-932882bb1de9" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.265 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" iface="eth0" netns="/var/run/netns/cni-245df2de-ffd4-8b9f-6fbd-932882bb1de9" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.266 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" iface="eth0" netns="/var/run/netns/cni-245df2de-ffd4-8b9f-6fbd-932882bb1de9" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.266 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.266 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.308 [INFO][4387] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.308 [INFO][4387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.309 [INFO][4387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.320 [WARNING][4387] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.320 [INFO][4387] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.322 [INFO][4387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:53.329354 containerd[1481]: 2026-01-17 00:20:53.326 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:20:53.334913 containerd[1481]: time="2026-01-17T00:20:53.331583461Z" level=info msg="TearDown network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" successfully" Jan 17 00:20:53.334913 containerd[1481]: time="2026-01-17T00:20:53.331629385Z" level=info msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" returns successfully" Jan 17 00:20:53.333994 systemd[1]: run-netns-cni\x2d245df2de\x2dffd4\x2d8b9f\x2d6fbd\x2d932882bb1de9.mount: Deactivated successfully. 
Jan 17 00:20:53.336226 kubelet[2532]: E0117 00:20:53.335959 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:53.338221 containerd[1481]: time="2026-01-17T00:20:53.338096469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqpfh,Uid:3b5332e7-48f1-496f-8faa-d235dc24f5d8,Namespace:kube-system,Attempt:1,}" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.277 [INFO][4374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.277 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" iface="eth0" netns="/var/run/netns/cni-568de502-fba8-6499-098b-bbfffdeec361" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.278 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" iface="eth0" netns="/var/run/netns/cni-568de502-fba8-6499-098b-bbfffdeec361" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.278 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" iface="eth0" netns="/var/run/netns/cni-568de502-fba8-6499-098b-bbfffdeec361" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.278 [INFO][4374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.278 [INFO][4374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.321 [INFO][4392] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.321 [INFO][4392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.322 [INFO][4392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.345 [WARNING][4392] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.345 [INFO][4392] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.348 [INFO][4392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:53.366019 containerd[1481]: 2026-01-17 00:20:53.358 [INFO][4374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:20:53.374726 containerd[1481]: time="2026-01-17T00:20:53.374409889Z" level=info msg="TearDown network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" successfully" Jan 17 00:20:53.374726 containerd[1481]: time="2026-01-17T00:20:53.374473478Z" level=info msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" returns successfully" Jan 17 00:20:53.375781 systemd[1]: run-netns-cni\x2d568de502\x2dfba8\x2d6499\x2d098b\x2dbbfffdeec361.mount: Deactivated successfully. Jan 17 00:20:53.385498 kubelet[2532]: E0117 00:20:53.385177 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:53.386025 containerd[1481]: time="2026-01-17T00:20:53.385920368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ntxlc,Uid:a20c7c12-692e-4309-b8a8-a42052435b98,Namespace:kube-system,Attempt:1,}" Jan 17 00:20:53.572055 systemd-networkd[1368]: cali665a93d3483: Link UP Jan 17 00:20:53.574240 systemd-networkd[1368]: cali665a93d3483: Gained carrier Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.453 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0 coredns-66bc5c9577- kube-system 3b5332e7-48f1-496f-8faa-d235dc24f5d8 1020 0 2026-01-17 00:20:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f coredns-66bc5c9577-fqpfh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali665a93d3483 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.454 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.601638 
containerd[1481]: 2026-01-17 00:20:53.502 [INFO][4421] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" HandleID="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.502 [INFO][4421] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" HandleID="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"coredns-66bc5c9577-fqpfh", "timestamp":"2026-01-17 00:20:53.50199754 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.502 [INFO][4421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.502 [INFO][4421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.502 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.519 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.527 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.537 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.542 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.546 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.547 [INFO][4421] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.549 [INFO][4421] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16 Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.555 [INFO][4421] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4421] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.5/26] block=192.168.59.0/26 
handle="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.5/26] handle="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:53.601638 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4421] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.5/26] IPv6=[] ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" HandleID="k8s-pod-network.53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.605456 containerd[1481]: 2026-01-17 00:20:53.567 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b5332e7-48f1-496f-8faa-d235dc24f5d8", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"coredns-66bc5c9577-fqpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali665a93d3483", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.605456 containerd[1481]: 2026-01-17 00:20:53.568 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.5/32] 
ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.605456 containerd[1481]: 2026-01-17 00:20:53.568 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali665a93d3483 ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.605456 containerd[1481]: 2026-01-17 00:20:53.576 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.605732 containerd[1481]: 2026-01-17 00:20:53.576 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b5332e7-48f1-496f-8faa-d235dc24f5d8", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16", Pod:"coredns-66bc5c9577-fqpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali665a93d3483", MAC:"56:b6:20:8d:e1:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.605732 containerd[1481]: 
2026-01-17 00:20:53.598 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16" Namespace="kube-system" Pod="coredns-66bc5c9577-fqpfh" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:20:53.610399 kubelet[2532]: E0117 00:20:53.609909 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:20:53.664298 containerd[1481]: time="2026-01-17T00:20:53.663707610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:53.664298 containerd[1481]: time="2026-01-17T00:20:53.663808151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:53.664298 containerd[1481]: time="2026-01-17T00:20:53.663824612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.669471 containerd[1481]: time="2026-01-17T00:20:53.666785654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.703526 systemd-networkd[1368]: cali0d1c0959d08: Link UP Jan 17 00:20:53.705071 systemd-networkd[1368]: cali0d1c0959d08: Gained carrier Jan 17 00:20:53.728862 systemd[1]: Started cri-containerd-53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16.scope - libcontainer container 53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16. 
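[editor's note] The ImagePullBackOff entries that keep reappearing above follow kubelet's image-pull backoff: the retry delay doubles from an initial value up to a ceiling (assumed here to be the common defaults of 10s doubling to a 300s cap). A sketch of how the delay series grows, which is why the same "Back-off pulling image" line recurs at widening, then steady five-minute, intervals:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 300*time.Second // assumed kubelet defaults
	for i := 1; i <= 7; i++ {
		fmt.Printf("retry %d after %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // settles at the cap: 10s 20s 40s 1m20s 2m40s 5m 5m ...
		}
	}
}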
Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.497 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0 coredns-66bc5c9577- kube-system a20c7c12-692e-4309-b8a8-a42052435b98 1021 0 2026-01-17 00:20:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f coredns-66bc5c9577-ntxlc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d1c0959d08 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.497 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.548 [INFO][4430] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" HandleID="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.548 [INFO][4430] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" HandleID="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"coredns-66bc5c9577-ntxlc", "timestamp":"2026-01-17 00:20:53.548732242 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.548 [INFO][4430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
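[editor's note] The long Workload/WorkloadEndpoint names above ("ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0") join node, orchestrator, pod, and interface with single dashes, with any dash inside a field doubled so the name can be split back unambiguously. A sketch inferred from the names in this log (not from Calico source):

package main

import (
	"fmt"
	"strings"
)

// endpointName builds the flat endpoint name; doubling '-' inside each
// field keeps single '-' free to act as the field separator.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
}

func main() {
	fmt.Println(endpointName("ci-4081.3.6-n-8d0945b27f", "coredns-66bc5c9577-ntxlc", "eth0"))
	// ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0
}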
Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.565 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.625 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.642 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.656 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.661 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.666 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.666 [INFO][4430] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.670 [INFO][4430] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.681 [INFO][4430] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.693 [INFO][4430] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.6/26] block=192.168.59.0/26 handle="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.694 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.6/26] handle="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.694 [INFO][4430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
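[editor's note] "Writing block in order to claim IPs" above is an optimistic write: the block is read at some datastore revision, mutated, and written back only if that revision is still current; on conflict the assignment loop rereads and retries. A generic compare-and-swap sketch of those assumed semantics (not Calico's code):

package main

import (
	"fmt"
	"sync"
)

type block struct {
	mu  sync.Mutex
	rev int      // stand-in for the datastore ResourceVersion
	ips []string // claimed addresses
}

// casClaim appends ip only if the caller's revision is still current.
func (b *block) casClaim(rev int, ip string) (int, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if rev != b.rev {
		return b.rev, false // conflict: someone else wrote the block first
	}
	b.ips = append(b.ips, ip)
	b.rev++
	return b.rev, true
}

func main() {
	b := &block{}
	rev := 0
	for {
		newRev, ok := b.casClaim(rev, "192.168.59.6")
		if ok {
			fmt.Println("claimed at revision", newRev)
			return
		}
		rev = newRev // reread and retry, as the IPAM walk does
	}
}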
Jan 17 00:20:53.760943 containerd[1481]: 2026-01-17 00:20:53.694 [INFO][4430] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.6/26] IPv6=[] ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" HandleID="k8s-pod-network.c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.761940 containerd[1481]: 2026-01-17 00:20:53.698 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a20c7c12-692e-4309-b8a8-a42052435b98", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"coredns-66bc5c9577-ntxlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d1c0959d08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.761940 containerd[1481]: 2026-01-17 00:20:53.699 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.6/32] ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.761940 containerd[1481]: 2026-01-17 00:20:53.699 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d1c0959d08 ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" 
WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.761940 containerd[1481]: 2026-01-17 00:20:53.705 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.763710 containerd[1481]: 2026-01-17 00:20:53.710 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a20c7c12-692e-4309-b8a8-a42052435b98", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca", Pod:"coredns-66bc5c9577-ntxlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d1c0959d08", MAC:"72:a1:85:86:4a:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.763710 containerd[1481]: 2026-01-17 00:20:53.753 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca" Namespace="kube-system" Pod="coredns-66bc5c9577-ntxlc" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:20:53.834147 containerd[1481]: time="2026-01-17T00:20:53.832328237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:53.834147 containerd[1481]: time="2026-01-17T00:20:53.832433755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:53.834147 containerd[1481]: time="2026-01-17T00:20:53.832456507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.834147 containerd[1481]: time="2026-01-17T00:20:53.832763829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.873092 systemd[1]: Started cri-containerd-c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca.scope - libcontainer container c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca. Jan 17 00:20:53.896116 containerd[1481]: time="2026-01-17T00:20:53.895958043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqpfh,Uid:3b5332e7-48f1-496f-8faa-d235dc24f5d8,Namespace:kube-system,Attempt:1,} returns sandbox id \"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16\"" Jan 17 00:20:53.900082 kubelet[2532]: E0117 00:20:53.899952 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:53.916342 containerd[1481]: time="2026-01-17T00:20:53.915689407Z" level=info msg="CreateContainer within sandbox \"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:20:53.961917 containerd[1481]: time="2026-01-17T00:20:53.961853776Z" level=info msg="CreateContainer within sandbox \"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9d3934acaca2b6e8ce8466770478807df6d40ee2c788b677a465fd9f32440b7\"" Jan 17 00:20:53.963547 containerd[1481]: time="2026-01-17T00:20:53.963479691Z" level=info msg="StartContainer for \"b9d3934acaca2b6e8ce8466770478807df6d40ee2c788b677a465fd9f32440b7\"" Jan 17 00:20:53.975735 containerd[1481]: time="2026-01-17T00:20:53.975694867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ntxlc,Uid:a20c7c12-692e-4309-b8a8-a42052435b98,Namespace:kube-system,Attempt:1,} returns sandbox id \"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca\"" Jan 17 00:20:53.977385 kubelet[2532]: E0117 00:20:53.977346 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:53.988438 containerd[1481]: time="2026-01-17T00:20:53.988374847Z" level=info msg="CreateContainer within sandbox \"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:20:54.010879 containerd[1481]: time="2026-01-17T00:20:54.010525667Z" level=info msg="CreateContainer within sandbox \"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9095436eee37ac82617091070d4818146d0e0af5d33013850bdfe2b23e55769f\"" Jan 17 00:20:54.012908 containerd[1481]: time="2026-01-17T00:20:54.012852839Z" level=info msg="StartContainer for 
\"9095436eee37ac82617091070d4818146d0e0af5d33013850bdfe2b23e55769f\"" Jan 17 00:20:54.025582 systemd[1]: Started cri-containerd-b9d3934acaca2b6e8ce8466770478807df6d40ee2c788b677a465fd9f32440b7.scope - libcontainer container b9d3934acaca2b6e8ce8466770478807df6d40ee2c788b677a465fd9f32440b7. Jan 17 00:20:54.074626 systemd[1]: Started cri-containerd-9095436eee37ac82617091070d4818146d0e0af5d33013850bdfe2b23e55769f.scope - libcontainer container 9095436eee37ac82617091070d4818146d0e0af5d33013850bdfe2b23e55769f. Jan 17 00:20:54.089521 containerd[1481]: time="2026-01-17T00:20:54.088768632Z" level=info msg="StartContainer for \"b9d3934acaca2b6e8ce8466770478807df6d40ee2c788b677a465fd9f32440b7\" returns successfully" Jan 17 00:20:54.126456 containerd[1481]: time="2026-01-17T00:20:54.126390822Z" level=info msg="StartContainer for \"9095436eee37ac82617091070d4818146d0e0af5d33013850bdfe2b23e55769f\" returns successfully" Jan 17 00:20:54.165542 containerd[1481]: time="2026-01-17T00:20:54.165186942Z" level=info msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" Jan 17 00:20:54.170504 containerd[1481]: time="2026-01-17T00:20:54.170424025Z" level=info msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.375 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.376 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" iface="eth0" netns="/var/run/netns/cni-8b835fad-2f15-df9b-a32a-58f7268d4fee" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.376 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" iface="eth0" netns="/var/run/netns/cni-8b835fad-2f15-df9b-a32a-58f7268d4fee" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.377 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" iface="eth0" netns="/var/run/netns/cni-8b835fad-2f15-df9b-a32a-58f7268d4fee" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.377 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.377 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.485 [INFO][4648] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.486 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.486 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.504 [WARNING][4648] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.504 [INFO][4648] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.509 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:54.519185 containerd[1481]: 2026-01-17 00:20:54.512 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:20:54.521788 containerd[1481]: time="2026-01-17T00:20:54.519956901Z" level=info msg="TearDown network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" successfully" Jan 17 00:20:54.521788 containerd[1481]: time="2026-01-17T00:20:54.520001066Z" level=info msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" returns successfully" Jan 17 00:20:54.525574 systemd[1]: run-netns-cni\x2d8b835fad\x2d2f15\x2ddf9b\x2da32a\x2d58f7268d4fee.mount: Deactivated successfully. Jan 17 00:20:54.531016 containerd[1481]: time="2026-01-17T00:20:54.530967260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-s5fxr,Uid:b34f9844-1f24-4158-8f3c-e8308ca5c340,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.364 [INFO][4630] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.365 [INFO][4630] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" iface="eth0" netns="/var/run/netns/cni-d0ddbe30-2148-6c16-5801-53717167bcc3" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.365 [INFO][4630] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" iface="eth0" netns="/var/run/netns/cni-d0ddbe30-2148-6c16-5801-53717167bcc3" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.365 [INFO][4630] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" iface="eth0" netns="/var/run/netns/cni-d0ddbe30-2148-6c16-5801-53717167bcc3" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.365 [INFO][4630] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.365 [INFO][4630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.491 [INFO][4645] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.492 [INFO][4645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.509 [INFO][4645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.543 [WARNING][4645] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.543 [INFO][4645] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.547 [INFO][4645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:54.567170 containerd[1481]: 2026-01-17 00:20:54.553 [INFO][4630] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:20:54.567170 containerd[1481]: time="2026-01-17T00:20:54.565466094Z" level=info msg="TearDown network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" successfully" Jan 17 00:20:54.567170 containerd[1481]: time="2026-01-17T00:20:54.565496782Z" level=info msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" returns successfully" Jan 17 00:20:54.574906 systemd[1]: run-netns-cni\x2dd0ddbe30\x2d2148\x2d6c16\x2d5801\x2d53717167bcc3.mount: Deactivated successfully. 
Jan 17 00:20:54.585000 containerd[1481]: time="2026-01-17T00:20:54.584945500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d567dbb6-j2ffv,Uid:75ed374b-4149-4248-8b00-b1cb0ceb9572,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:54.670754 kubelet[2532]: E0117 00:20:54.669723 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:54.710309 kubelet[2532]: I0117 00:20:54.708306 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fqpfh" podStartSLOduration=47.708247835 podStartE2EDuration="47.708247835s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:54.705330926 +0000 UTC m=+52.885022241" watchObservedRunningTime="2026-01-17 00:20:54.708247835 +0000 UTC m=+52.887939148" Jan 17 00:20:54.715192 kubelet[2532]: E0117 00:20:54.715046 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:54.737292 kubelet[2532]: I0117 00:20:54.735200 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ntxlc" podStartSLOduration=47.735172082 podStartE2EDuration="47.735172082s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:54.734124794 +0000 UTC m=+52.913816115" watchObservedRunningTime="2026-01-17 00:20:54.735172082 +0000 UTC m=+52.914863400" Jan 17 00:20:54.875570 systemd-networkd[1368]: cali0d1c0959d08: Gained IPv6LL Jan 17 00:20:54.974406 systemd-networkd[1368]: calib131e8d67ac: Link UP Jan 17 00:20:54.977420 systemd-networkd[1368]: calib131e8d67ac: Gained carrier Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.716 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0 calico-kube-controllers-67d567dbb6- calico-system 75ed374b-4149-4248-8b00-b1cb0ceb9572 1046 0 2026-01-17 00:20:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67d567dbb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f calico-kube-controllers-67d567dbb6-j2ffv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib131e8d67ac [] [] }} ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.718 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" 
WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.834 [INFO][4689] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" HandleID="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.835 [INFO][4689] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" HandleID="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"calico-kube-controllers-67d567dbb6-j2ffv", "timestamp":"2026-01-17 00:20:54.834795606 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.835 [INFO][4689] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.835 [INFO][4689] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.835 [INFO][4689] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.851 [INFO][4689] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.864 [INFO][4689] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.874 [INFO][4689] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.883 [INFO][4689] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.893 [INFO][4689] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.893 [INFO][4689] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.900 [INFO][4689] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69 Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.928 [INFO][4689] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" 
host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.953 [INFO][4689] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.7/26] block=192.168.59.0/26 handle="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.953 [INFO][4689] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.7/26] handle="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.009905 containerd[1481]: 2026-01-17 00:20:54.954 [INFO][4689] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:55.011866 containerd[1481]: 2026-01-17 00:20:54.954 [INFO][4689] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.7/26] IPv6=[] ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" HandleID="k8s-pod-network.cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.011866 containerd[1481]: 2026-01-17 00:20:54.961 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0", GenerateName:"calico-kube-controllers-67d567dbb6-", Namespace:"calico-system", SelfLink:"", UID:"75ed374b-4149-4248-8b00-b1cb0ceb9572", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d567dbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"calico-kube-controllers-67d567dbb6-j2ffv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib131e8d67ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.011866 containerd[1481]: 2026-01-17 00:20:54.964 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.7/32] ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.011866 containerd[1481]: 2026-01-17 00:20:54.964 [INFO][4672] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib131e8d67ac ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.011866 containerd[1481]: 2026-01-17 00:20:54.978 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.012094 containerd[1481]: 2026-01-17 00:20:54.981 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0", GenerateName:"calico-kube-controllers-67d567dbb6-", Namespace:"calico-system", SelfLink:"", UID:"75ed374b-4149-4248-8b00-b1cb0ceb9572", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d567dbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69", Pod:"calico-kube-controllers-67d567dbb6-j2ffv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib131e8d67ac", MAC:"b6:4c:c9:00:f8:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.012094 containerd[1481]: 2026-01-17 00:20:55.005 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69" Namespace="calico-system" Pod="calico-kube-controllers-67d567dbb6-j2ffv" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:20:55.061913 containerd[1481]: time="2026-01-17T00:20:55.059167289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:55.061913 containerd[1481]: time="2026-01-17T00:20:55.059289119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:55.061913 containerd[1481]: time="2026-01-17T00:20:55.059311740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.061913 containerd[1481]: time="2026-01-17T00:20:55.059431179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.108632 systemd[1]: Started cri-containerd-cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69.scope - libcontainer container cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69. Jan 17 00:20:55.169807 systemd-networkd[1368]: calicf2a810000e: Link UP Jan 17 00:20:55.173281 systemd-networkd[1368]: calicf2a810000e: Gained carrier Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.827 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0 calico-apiserver-599ddd4698- calico-apiserver b34f9844-1f24-4158-8f3c-e8308ca5c340 1047 0 2026-01-17 00:20:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599ddd4698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8d0945b27f calico-apiserver-599ddd4698-s5fxr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf2a810000e [] [] }} ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.828 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.923 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" HandleID="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.926 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" HandleID="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ac50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8d0945b27f", "pod":"calico-apiserver-599ddd4698-s5fxr", "timestamp":"2026-01-17 00:20:54.923936917 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8d0945b27f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.926 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.954 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:54.954 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8d0945b27f' Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.004 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.021 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.052 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.061 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.078 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.078 [INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.084 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288 Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.121 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.152 [INFO][4697] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.8/26] block=192.168.59.0/26 handle="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.153 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.8/26] handle="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" host="ci-4081.3.6-n-8d0945b27f" Jan 17 00:20:55.213696 containerd[1481]: 2026-01-17 00:20:55.153 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:55.217490 containerd[1481]: 2026-01-17 00:20:55.153 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.8/26] IPv6=[] ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" HandleID="k8s-pod-network.afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.217490 containerd[1481]: 2026-01-17 00:20:55.164 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34f9844-1f24-4158-8f3c-e8308ca5c340", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"", Pod:"calico-apiserver-599ddd4698-s5fxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2a810000e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.217490 containerd[1481]: 2026-01-17 00:20:55.165 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.8/32] ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.217490 containerd[1481]: 2026-01-17 00:20:55.165 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf2a810000e ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.217490 containerd[1481]: 2026-01-17 00:20:55.175 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.219036 containerd[1481]: 2026-01-17 00:20:55.176 [INFO][4663] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34f9844-1f24-4158-8f3c-e8308ca5c340", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288", Pod:"calico-apiserver-599ddd4698-s5fxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2a810000e", MAC:"ca:dc:af:dc:eb:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.219036 containerd[1481]: 2026-01-17 00:20:55.203 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288" Namespace="calico-apiserver" Pod="calico-apiserver-599ddd4698-s5fxr" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:20:55.250975 containerd[1481]: time="2026-01-17T00:20:55.250131435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:55.252881 containerd[1481]: time="2026-01-17T00:20:55.250222779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:55.252881 containerd[1481]: time="2026-01-17T00:20:55.252321976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.252881 containerd[1481]: time="2026-01-17T00:20:55.252487985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.279531 systemd[1]: Started cri-containerd-afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288.scope - libcontainer container afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288. 
Jan 17 00:20:55.286640 kubelet[2532]: I0117 00:20:55.285735 2532 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:55.291201 kubelet[2532]: E0117 00:20:55.290249 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:55.326465 systemd-networkd[1368]: cali665a93d3483: Gained IPv6LL Jan 17 00:20:55.466155 systemd[1]: run-containerd-runc-k8s.io-0b7cfafa8820ad7ca41d64ea575ff9839e01d0e979583d349aa6ddf8a7d26f69-runc.ayQHJs.mount: Deactivated successfully. Jan 17 00:20:55.467518 containerd[1481]: time="2026-01-17T00:20:55.466814332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d567dbb6-j2ffv,Uid:75ed374b-4149-4248-8b00-b1cb0ceb9572,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69\"" Jan 17 00:20:55.473177 containerd[1481]: time="2026-01-17T00:20:55.472943882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:20:55.554860 containerd[1481]: time="2026-01-17T00:20:55.554798967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599ddd4698-s5fxr,Uid:b34f9844-1f24-4158-8f3c-e8308ca5c340,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288\"" Jan 17 00:20:55.702919 kubelet[2532]: E0117 00:20:55.702868 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:55.706539 kubelet[2532]: E0117 00:20:55.704291 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:55.706539 kubelet[2532]: E0117 00:20:55.705208 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:55.816850 containerd[1481]: time="2026-01-17T00:20:55.816616047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:55.817809 containerd[1481]: time="2026-01-17T00:20:55.817732973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:20:55.818121 containerd[1481]: time="2026-01-17T00:20:55.817860645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:55.818246 kubelet[2532]: E0117 00:20:55.818100 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:55.818246 kubelet[2532]: E0117 00:20:55.818183 2532 kuberuntime_image.go:43] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:55.818718 kubelet[2532]: E0117 00:20:55.818677 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67d567dbb6-j2ffv_calico-system(75ed374b-4149-4248-8b00-b1cb0ceb9572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:55.818845 kubelet[2532]: E0117 00:20:55.818743 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:20:55.819955 containerd[1481]: time="2026-01-17T00:20:55.819583926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:56.159709 containerd[1481]: time="2026-01-17T00:20:56.158823366Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:56.163298 containerd[1481]: time="2026-01-17T00:20:56.162281359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:56.163298 containerd[1481]: time="2026-01-17T00:20:56.162274869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:56.163523 kubelet[2532]: E0117 00:20:56.162680 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:56.163523 kubelet[2532]: E0117 00:20:56.162742 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:56.163523 kubelet[2532]: E0117 00:20:56.162842 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-599ddd4698-s5fxr_calico-apiserver(b34f9844-1f24-4158-8f3c-e8308ca5c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:56.163523 kubelet[2532]: E0117 00:20:56.162889 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:20:56.472473 systemd-networkd[1368]: calib131e8d67ac: Gained IPv6LL Jan 17 00:20:56.706891 kubelet[2532]: E0117 00:20:56.706839 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:20:56.710654 kubelet[2532]: E0117 00:20:56.710595 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:20:56.710943 kubelet[2532]: E0117 00:20:56.710708 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:20:57.178629 systemd-networkd[1368]: calicf2a810000e: Gained IPv6LL Jan 17 00:21:02.134150 containerd[1481]: time="2026-01-17T00:21:02.134096227Z" level=info msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.223 [WARNING][4875] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b5332e7-48f1-496f-8faa-d235dc24f5d8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16", Pod:"coredns-66bc5c9577-fqpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali665a93d3483", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.224 [INFO][4875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.224 [INFO][4875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" iface="eth0" netns="" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.224 [INFO][4875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.224 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.283 [INFO][4885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.283 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.283 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.309 [WARNING][4885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.309 [INFO][4885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.320 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:02.328444 containerd[1481]: 2026-01-17 00:21:02.325 [INFO][4875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.328945 containerd[1481]: time="2026-01-17T00:21:02.328507275Z" level=info msg="TearDown network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" successfully" Jan 17 00:21:02.328945 containerd[1481]: time="2026-01-17T00:21:02.328536128Z" level=info msg="StopPodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" returns successfully" Jan 17 00:21:02.330763 containerd[1481]: time="2026-01-17T00:21:02.329557550Z" level=info msg="RemovePodSandbox for \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" Jan 17 00:21:02.332317 containerd[1481]: time="2026-01-17T00:21:02.332276343Z" level=info msg="Forcibly stopping sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\"" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.423 [WARNING][4899] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b5332e7-48f1-496f-8faa-d235dc24f5d8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"53fdcbb4a4bb0670dffade45e7fdec71a07ea93e170928f7d2de322300b9df16", Pod:"coredns-66bc5c9577-fqpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali665a93d3483", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.424 [INFO][4899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.424 [INFO][4899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" iface="eth0" netns="" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.424 [INFO][4899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.424 [INFO][4899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.456 [INFO][4906] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.457 [INFO][4906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.457 [INFO][4906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.467 [WARNING][4906] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.467 [INFO][4906] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" HandleID="k8s-pod-network.4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--fqpfh-eth0" Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.473 [INFO][4906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:02.480510 containerd[1481]: 2026-01-17 00:21:02.476 [INFO][4899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd" Jan 17 00:21:02.480510 containerd[1481]: time="2026-01-17T00:21:02.480457130Z" level=info msg="TearDown network for sandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" successfully" Jan 17 00:21:02.501305 containerd[1481]: time="2026-01-17T00:21:02.500115340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:02.501305 containerd[1481]: time="2026-01-17T00:21:02.500222114Z" level=info msg="RemovePodSandbox \"4bd93b6bcf45c249eb626714a92d5c5f2d0d60d3fe66ce80354493c57e5827fd\" returns successfully" Jan 17 00:21:02.502125 containerd[1481]: time="2026-01-17T00:21:02.501808087Z" level=info msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\"" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.601 [WARNING][4920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cd0417c-a83c-4bf0-96f8-9680bbeb055b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b", Pod:"calico-apiserver-599ddd4698-rsvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30162db12ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.602 [INFO][4920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.602 [INFO][4920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" iface="eth0" netns="" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.602 [INFO][4920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.602 [INFO][4920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.644 [INFO][4928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.645 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.645 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.658 [WARNING][4928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.658 [INFO][4928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.660 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:02.667332 containerd[1481]: 2026-01-17 00:21:02.663 [INFO][4920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.667332 containerd[1481]: time="2026-01-17T00:21:02.667120898Z" level=info msg="TearDown network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" successfully" Jan 17 00:21:02.667332 containerd[1481]: time="2026-01-17T00:21:02.667150134Z" level=info msg="StopPodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" returns successfully" Jan 17 00:21:02.668552 containerd[1481]: time="2026-01-17T00:21:02.668439429Z" level=info msg="RemovePodSandbox for \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\"" Jan 17 00:21:02.668552 containerd[1481]: time="2026-01-17T00:21:02.668481626Z" level=info msg="Forcibly stopping sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\"" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.729 [WARNING][4942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cd0417c-a83c-4bf0-96f8-9680bbeb055b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"5917cebbc477933f0a010c321f0ec051519287af4678207db7ba34a21888449b", Pod:"calico-apiserver-599ddd4698-rsvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30162db12ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.730 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.730 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" iface="eth0" netns="" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.730 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.730 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.761 [INFO][4950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.762 [INFO][4950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.762 [INFO][4950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.770 [WARNING][4950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.770 [INFO][4950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" HandleID="k8s-pod-network.e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--rsvbr-eth0" Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.772 [INFO][4950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:02.776699 containerd[1481]: 2026-01-17 00:21:02.774 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812" Jan 17 00:21:02.776699 containerd[1481]: time="2026-01-17T00:21:02.776651586Z" level=info msg="TearDown network for sandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" successfully" Jan 17 00:21:02.781355 containerd[1481]: time="2026-01-17T00:21:02.781294301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:02.781521 containerd[1481]: time="2026-01-17T00:21:02.781405655Z" level=info msg="RemovePodSandbox \"e0e3989ceaada22be553b3ee656b881d0c1e4f208b78d90a7e6f765c614db812\" returns successfully" Jan 17 00:21:02.783461 containerd[1481]: time="2026-01-17T00:21:02.782056633Z" level=info msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.824 [WARNING][4965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34f9844-1f24-4158-8f3c-e8308ca5c340", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288", Pod:"calico-apiserver-599ddd4698-s5fxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2a810000e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.825 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.825 [INFO][4965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" iface="eth0" netns="" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.825 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.825 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.861 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.861 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.861 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.878 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.878 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.888 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:02.893937 containerd[1481]: 2026-01-17 00:21:02.891 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:02.896511 containerd[1481]: time="2026-01-17T00:21:02.893982442Z" level=info msg="TearDown network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" successfully" Jan 17 00:21:02.896511 containerd[1481]: time="2026-01-17T00:21:02.894009977Z" level=info msg="StopPodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" returns successfully" Jan 17 00:21:02.896511 containerd[1481]: time="2026-01-17T00:21:02.895750568Z" level=info msg="RemovePodSandbox for \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" Jan 17 00:21:02.896511 containerd[1481]: time="2026-01-17T00:21:02.895783861Z" level=info msg="Forcibly stopping sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\"" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:02.960 [WARNING][4987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0", GenerateName:"calico-apiserver-599ddd4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"b34f9844-1f24-4158-8f3c-e8308ca5c340", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599ddd4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"afb046e73b0ef83482257eecf7de5cec208cec5a0d4bd374fe653928b2120288", Pod:"calico-apiserver-599ddd4698-s5fxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2a810000e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:02.961 [INFO][4987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:02.961 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" iface="eth0" netns="" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:02.961 [INFO][4987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:02.961 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.006 [INFO][4994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.006 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.006 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.014 [WARNING][4994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.014 [INFO][4994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" HandleID="k8s-pod-network.a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--apiserver--599ddd4698--s5fxr-eth0" Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.017 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.024840 containerd[1481]: 2026-01-17 00:21:03.019 [INFO][4987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0" Jan 17 00:21:03.025633 containerd[1481]: time="2026-01-17T00:21:03.024903571Z" level=info msg="TearDown network for sandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" successfully" Jan 17 00:21:03.029776 containerd[1481]: time="2026-01-17T00:21:03.029608847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:03.029776 containerd[1481]: time="2026-01-17T00:21:03.029683754Z" level=info msg="RemovePodSandbox \"a09cd9fafc1b4e72f16c8b92b32e23123d788bfe6a5ad457c449aa8a336f52b0\" returns successfully" Jan 17 00:21:03.032603 containerd[1481]: time="2026-01-17T00:21:03.031947832Z" level=info msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\"" Jan 17 00:21:03.164976 containerd[1481]: time="2026-01-17T00:21:03.163963430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.110 [WARNING][5008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"785ca1fd-8ad2-4e63-be23-ced8350e2045", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3", Pod:"csi-node-driver-pbn95", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89ecd7ff38e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.110 [INFO][5008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.110 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" iface="eth0" netns="" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.110 [INFO][5008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.110 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.151 [INFO][5015] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.151 [INFO][5015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.151 [INFO][5015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.163 [WARNING][5015] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.164 [INFO][5015] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.169 [INFO][5015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.176926 containerd[1481]: 2026-01-17 00:21:03.174 [INFO][5008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.178388 containerd[1481]: time="2026-01-17T00:21:03.176995531Z" level=info msg="TearDown network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" successfully" Jan 17 00:21:03.178388 containerd[1481]: time="2026-01-17T00:21:03.177123053Z" level=info msg="StopPodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" returns successfully" Jan 17 00:21:03.181927 containerd[1481]: time="2026-01-17T00:21:03.181884528Z" level=info msg="RemovePodSandbox for \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\"" Jan 17 00:21:03.182017 containerd[1481]: time="2026-01-17T00:21:03.181940476Z" level=info msg="Forcibly stopping sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\"" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.255 [WARNING][5029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"785ca1fd-8ad2-4e63-be23-ced8350e2045", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"ec0076aec14a83131ee098f954b31e2848ff74ad31abc67548cc8dc3465c7ac3", Pod:"csi-node-driver-pbn95", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89ecd7ff38e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.256 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.256 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" iface="eth0" netns="" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.256 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.256 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.291 [INFO][5036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.292 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.292 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.302 [WARNING][5036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.302 [INFO][5036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" HandleID="k8s-pod-network.062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-csi--node--driver--pbn95-eth0" Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.304 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.310697 containerd[1481]: 2026-01-17 00:21:03.306 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d" Jan 17 00:21:03.311292 containerd[1481]: time="2026-01-17T00:21:03.310994595Z" level=info msg="TearDown network for sandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" successfully" Jan 17 00:21:03.317375 containerd[1481]: time="2026-01-17T00:21:03.317113738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:03.317375 containerd[1481]: time="2026-01-17T00:21:03.317410233Z" level=info msg="RemovePodSandbox \"062e2fb87e9909033a7e4477bb22f6e86d2f9a2a31f5eac35b6ad5abfef77b0d\" returns successfully" Jan 17 00:21:03.323815 containerd[1481]: time="2026-01-17T00:21:03.323757104Z" level=info msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\"" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.378 [WARNING][5051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a3c98a2f-bcb2-4019-8b39-98c736ccd677", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3", Pod:"goldmane-7c778bb748-m249n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf368edc330", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.379 [INFO][5051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.379 [INFO][5051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" iface="eth0" netns="" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.379 [INFO][5051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.379 [INFO][5051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.427 [INFO][5058] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.427 [INFO][5058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.427 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.436 [WARNING][5058] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.437 [INFO][5058] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.439 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.444589 containerd[1481]: 2026-01-17 00:21:03.442 [INFO][5051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.445723 containerd[1481]: time="2026-01-17T00:21:03.444649683Z" level=info msg="TearDown network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" successfully" Jan 17 00:21:03.445723 containerd[1481]: time="2026-01-17T00:21:03.444685589Z" level=info msg="StopPodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" returns successfully" Jan 17 00:21:03.445723 containerd[1481]: time="2026-01-17T00:21:03.445594413Z" level=info msg="RemovePodSandbox for \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\"" Jan 17 00:21:03.445723 containerd[1481]: time="2026-01-17T00:21:03.445626615Z" level=info msg="Forcibly stopping sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\"" Jan 17 00:21:03.506854 containerd[1481]: time="2026-01-17T00:21:03.506295134Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:03.508946 containerd[1481]: time="2026-01-17T00:21:03.507785487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:03.508946 containerd[1481]: time="2026-01-17T00:21:03.507857845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:03.509599 kubelet[2532]: E0117 00:21:03.509297 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:03.509599 kubelet[2532]: E0117 00:21:03.509348 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:03.509599 kubelet[2532]: E0117 00:21:03.509569 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-599ddd4698-rsvbr_calico-apiserver(7cd0417c-a83c-4bf0-96f8-9680bbeb055b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:03.510072 kubelet[2532]: E0117 00:21:03.509634 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:21:03.514322 containerd[1481]: time="2026-01-17T00:21:03.512502328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.504 [WARNING][5072] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a3c98a2f-bcb2-4019-8b39-98c736ccd677", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"b3724cf7e6788d1bdf91de6f40e54da258338301b0fbfd2557758830f6c1cdc3", Pod:"goldmane-7c778bb748-m249n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf368edc330", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.505 [INFO][5072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.505 [INFO][5072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
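
The pull failure above is a plain registry 404: the resolver "tr[ies] next host" for ghcr.io, runs out of hosts, and kubelet surfaces ErrImagePull for the calico-apiserver container. The same resolution can be reproduced outside kubelet, for example with "crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4" on the node, or with containerd's public resolver; a minimal sketch, assuming anonymous access to a public ghcr.io repository:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd/remotes/docker"
    )

    func main() {
        resolver := docker.NewResolver(docker.ResolverOptions{})
        ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"

        name, desc, err := resolver.Resolve(context.Background(), ref)
        if err != nil {
            // Mirrors the log: the tag cannot be resolved, so kubelet's
            // pull fails with ErrImagePull and the pod sync is retried.
            log.Fatalf("resolve %s: %v", ref, err)
        }
        fmt.Println(name, desc.Digest)
    }
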
ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" iface="eth0" netns="" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.505 [INFO][5072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.505 [INFO][5072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.562 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.562 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.563 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.585 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.585 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" HandleID="k8s-pod-network.ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Workload="ci--4081.3.6--n--8d0945b27f-k8s-goldmane--7c778bb748--m249n-eth0" Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.588 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.595194 containerd[1481]: 2026-01-17 00:21:03.590 [INFO][5072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9" Jan 17 00:21:03.596100 containerd[1481]: time="2026-01-17T00:21:03.595241503Z" level=info msg="TearDown network for sandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" successfully" Jan 17 00:21:03.603627 containerd[1481]: time="2026-01-17T00:21:03.603559959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:03.603763 containerd[1481]: time="2026-01-17T00:21:03.603679459Z" level=info msg="RemovePodSandbox \"ee04f0da860bbe55771d936c34dbd64b6a7af242fa0ad04030e1cdbc77a92be9\" returns successfully" Jan 17 00:21:03.607370 containerd[1481]: time="2026-01-17T00:21:03.606950946Z" level=info msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.680 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a20c7c12-692e-4309-b8a8-a42052435b98", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca", Pod:"coredns-66bc5c9577-ntxlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d1c0959d08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.680 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.680 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" iface="eth0" netns="" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.680 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.681 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.716 [INFO][5101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.716 [INFO][5101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.716 [INFO][5101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.747 [WARNING][5101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.747 [INFO][5101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.751 [INFO][5101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.755786 containerd[1481]: 2026-01-17 00:21:03.753 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.757100 containerd[1481]: time="2026-01-17T00:21:03.755862804Z" level=info msg="TearDown network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" successfully" Jan 17 00:21:03.757100 containerd[1481]: time="2026-01-17T00:21:03.755901074Z" level=info msg="StopPodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" returns successfully" Jan 17 00:21:03.757100 containerd[1481]: time="2026-01-17T00:21:03.756416821Z" level=info msg="RemovePodSandbox for \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" Jan 17 00:21:03.757100 containerd[1481]: time="2026-01-17T00:21:03.756445742Z" level=info msg="Forcibly stopping sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\"" Jan 17 00:21:03.858232 containerd[1481]: time="2026-01-17T00:21:03.858043520Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:03.859183 containerd[1481]: time="2026-01-17T00:21:03.859067138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:21:03.861334 containerd[1481]: time="2026-01-17T00:21:03.859144775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:03.861411 kubelet[2532]: E0117 00:21:03.859420 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:03.861411 kubelet[2532]: E0117 00:21:03.859468 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:03.861411 kubelet[2532]: E0117 00:21:03.859546 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m249n_calico-system(a3c98a2f-bcb2-4019-8b39-98c736ccd677): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:03.861411 kubelet[2532]: E0117 00:21:03.859578 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.814 [WARNING][5116] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a20c7c12-692e-4309-b8a8-a42052435b98", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"c5fb2531b60d21cab6021d1a610dceebed59db45384046228a8fbb0399b68fca", Pod:"coredns-66bc5c9577-ntxlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d1c0959d08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.814 [INFO][5116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.814 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" iface="eth0" netns="" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.814 [INFO][5116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.814 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.851 [INFO][5123] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.851 [INFO][5123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.851 [INFO][5123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.861 [WARNING][5123] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.861 [INFO][5123] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" HandleID="k8s-pod-network.71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Workload="ci--4081.3.6--n--8d0945b27f-k8s-coredns--66bc5c9577--ntxlc-eth0" Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.865 [INFO][5123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.872069 containerd[1481]: 2026-01-17 00:21:03.868 [INFO][5116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d" Jan 17 00:21:03.872560 containerd[1481]: time="2026-01-17T00:21:03.872122458Z" level=info msg="TearDown network for sandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" successfully" Jan 17 00:21:03.877111 containerd[1481]: time="2026-01-17T00:21:03.877036839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:03.877282 containerd[1481]: time="2026-01-17T00:21:03.877160880Z" level=info msg="RemovePodSandbox \"71d359a53022665a88480a107709ecf08fe06b31266584a3794b66dcf883408d\" returns successfully" Jan 17 00:21:03.878233 containerd[1481]: time="2026-01-17T00:21:03.878201184Z" level=info msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.927 [WARNING][5137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0", GenerateName:"calico-kube-controllers-67d567dbb6-", Namespace:"calico-system", SelfLink:"", UID:"75ed374b-4149-4248-8b00-b1cb0ceb9572", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d567dbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69", Pod:"calico-kube-controllers-67d567dbb6-j2ffv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib131e8d67ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.927 [INFO][5137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.927 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" iface="eth0" netns="" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.927 [INFO][5137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.927 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.960 [INFO][5144] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.960 [INFO][5144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.960 [INFO][5144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.972 [WARNING][5144] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.972 [INFO][5144] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.976 [INFO][5144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:03.980361 containerd[1481]: 2026-01-17 00:21:03.978 [INFO][5137] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:03.981760 containerd[1481]: time="2026-01-17T00:21:03.980412046Z" level=info msg="TearDown network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" successfully" Jan 17 00:21:03.981760 containerd[1481]: time="2026-01-17T00:21:03.980438004Z" level=info msg="StopPodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" returns successfully" Jan 17 00:21:03.981760 containerd[1481]: time="2026-01-17T00:21:03.981048744Z" level=info msg="RemovePodSandbox for \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" Jan 17 00:21:03.981760 containerd[1481]: time="2026-01-17T00:21:03.981114265Z" level=info msg="Forcibly stopping sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\"" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.034 [WARNING][5159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0", GenerateName:"calico-kube-controllers-67d567dbb6-", Namespace:"calico-system", SelfLink:"", UID:"75ed374b-4149-4248-8b00-b1cb0ceb9572", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d567dbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8d0945b27f", ContainerID:"cf675bdf80984123feac5a18931564e18a69ee6c52c07ec841643b5d58a6da69", Pod:"calico-kube-controllers-67d567dbb6-j2ffv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib131e8d67ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.034 [INFO][5159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.034 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" iface="eth0" netns="" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.034 [INFO][5159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.034 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.076 [INFO][5166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.076 [INFO][5166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.076 [INFO][5166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.087 [WARNING][5166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.087 [INFO][5166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" HandleID="k8s-pod-network.1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Workload="ci--4081.3.6--n--8d0945b27f-k8s-calico--kube--controllers--67d567dbb6--j2ffv-eth0" Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.090 [INFO][5166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:04.096362 containerd[1481]: 2026-01-17 00:21:04.092 [INFO][5159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a" Jan 17 00:21:04.096362 containerd[1481]: time="2026-01-17T00:21:04.095319402Z" level=info msg="TearDown network for sandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" successfully" Jan 17 00:21:04.098431 containerd[1481]: time="2026-01-17T00:21:04.098385694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:04.098431 containerd[1481]: time="2026-01-17T00:21:04.098455671Z" level=info msg="RemovePodSandbox \"1c201bd47e72d7d632f255b2959f34562e9fa9f5d67eb513df60fa5d422f490a\" returns successfully" Jan 17 00:21:04.099031 containerd[1481]: time="2026-01-17T00:21:04.098957197Z" level=info msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.149 [WARNING][5180] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.150 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.150 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" iface="eth0" netns="" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.150 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.150 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.205 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.205 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.205 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.227 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.227 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.231 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:04.240030 containerd[1481]: 2026-01-17 00:21:04.236 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.240030 containerd[1481]: time="2026-01-17T00:21:04.239832307Z" level=info msg="TearDown network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" successfully" Jan 17 00:21:04.240030 containerd[1481]: time="2026-01-17T00:21:04.239875212Z" level=info msg="StopPodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" returns successfully" Jan 17 00:21:04.244141 containerd[1481]: time="2026-01-17T00:21:04.242030476Z" level=info msg="RemovePodSandbox for \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" Jan 17 00:21:04.244141 containerd[1481]: time="2026-01-17T00:21:04.242075163Z" level=info msg="Forcibly stopping sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\"" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.339 [WARNING][5201] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" WorkloadEndpoint="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.339 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.339 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" iface="eth0" netns="" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.339 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.339 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.387 [INFO][5208] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.389 [INFO][5208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.389 [INFO][5208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.407 [WARNING][5208] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.408 [INFO][5208] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" HandleID="k8s-pod-network.e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Workload="ci--4081.3.6--n--8d0945b27f-k8s-whisker--559c9bff5c--pgg2b-eth0" Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.413 [INFO][5208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:04.421905 containerd[1481]: 2026-01-17 00:21:04.418 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689" Jan 17 00:21:04.422671 containerd[1481]: time="2026-01-17T00:21:04.422016228Z" level=info msg="TearDown network for sandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" successfully" Jan 17 00:21:04.425744 containerd[1481]: time="2026-01-17T00:21:04.425642022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:04.425744 containerd[1481]: time="2026-01-17T00:21:04.425741559Z" level=info msg="RemovePodSandbox \"e58411212574d691e6f7adf0b5cac7a1ac75650fe92e816fbe275185fce24689\" returns successfully" Jan 17 00:21:05.161234 containerd[1481]: time="2026-01-17T00:21:05.161159251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:21:05.517982 containerd[1481]: time="2026-01-17T00:21:05.516893747Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:05.517982 containerd[1481]: time="2026-01-17T00:21:05.517713259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:21:05.517982 containerd[1481]: time="2026-01-17T00:21:05.517797332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:21:05.519670 kubelet[2532]: E0117 00:21:05.518719 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:05.519670 kubelet[2532]: E0117 00:21:05.518766 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:05.519670 kubelet[2532]: E0117 00:21:05.518852 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker 
start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:05.527704 systemd[1]: Started sshd@7-165.232.147.124:22-4.153.228.146:43866.service - OpenSSH per-connection server daemon (4.153.228.146:43866). Jan 17 00:21:05.545679 containerd[1481]: time="2026-01-17T00:21:05.545617910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:21:05.894398 containerd[1481]: time="2026-01-17T00:21:05.894290004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:05.896683 containerd[1481]: time="2026-01-17T00:21:05.896373454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:21:05.896683 containerd[1481]: time="2026-01-17T00:21:05.896506761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:05.899029 kubelet[2532]: E0117 00:21:05.897320 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:05.912965 kubelet[2532]: E0117 00:21:05.899042 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:05.912965 kubelet[2532]: E0117 00:21:05.912589 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:05.912965 kubelet[2532]: E0117 00:21:05.912657 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8" Jan 17 00:21:06.049875 sshd[5215]: Accepted publickey for core from 4.153.228.146 port 43866 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:06.052135 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:06.063496 systemd-logind[1452]: New session 8 of user core. Jan 17 00:21:06.069530 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:21:07.058545 sshd[5215]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:07.062978 systemd[1]: sshd@7-165.232.147.124:22-4.153.228.146:43866.service: Deactivated successfully. Jan 17 00:21:07.065886 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:21:07.068502 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:21:07.070069 systemd-logind[1452]: Removed session 8. Jan 17 00:21:07.167748 containerd[1481]: time="2026-01-17T00:21:07.167610379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:21:07.499747 containerd[1481]: time="2026-01-17T00:21:07.498764580Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:07.501266 containerd[1481]: time="2026-01-17T00:21:07.500853995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:07.501266 containerd[1481]: time="2026-01-17T00:21:07.501044374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:07.503664 kubelet[2532]: E0117 00:21:07.502166 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:07.503664 kubelet[2532]: E0117 00:21:07.502378 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:07.503664 kubelet[2532]: E0117 00:21:07.502492 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:07.506582 containerd[1481]: time="2026-01-17T00:21:07.506122049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:07.828479 containerd[1481]: time="2026-01-17T00:21:07.828309920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:07.829568 
containerd[1481]: time="2026-01-17T00:21:07.829398044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:07.829568 containerd[1481]: time="2026-01-17T00:21:07.829434468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:07.830746 kubelet[2532]: E0117 00:21:07.829982 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:07.830746 kubelet[2532]: E0117 00:21:07.830063 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:07.830746 kubelet[2532]: E0117 00:21:07.830171 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:07.831024 kubelet[2532]: E0117 00:21:07.830237 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:21:08.163070 containerd[1481]: time="2026-01-17T00:21:08.163011733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:08.501393 containerd[1481]: time="2026-01-17T00:21:08.499563837Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:08.503363 containerd[1481]: time="2026-01-17T00:21:08.503279892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:08.503363 containerd[1481]: 
time="2026-01-17T00:21:08.503289814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:08.505438 kubelet[2532]: E0117 00:21:08.504424 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:08.505438 kubelet[2532]: E0117 00:21:08.504544 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:08.505438 kubelet[2532]: E0117 00:21:08.504658 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-599ddd4698-s5fxr_calico-apiserver(b34f9844-1f24-4158-8f3c-e8308ca5c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:08.505438 kubelet[2532]: E0117 00:21:08.504713 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:21:11.165031 containerd[1481]: time="2026-01-17T00:21:11.164984294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:11.512199 containerd[1481]: time="2026-01-17T00:21:11.511997184Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:11.514803 containerd[1481]: time="2026-01-17T00:21:11.513643639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:11.514803 containerd[1481]: time="2026-01-17T00:21:11.513690504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:11.516139 kubelet[2532]: E0117 00:21:11.514023 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:11.516139 kubelet[2532]: E0117 00:21:11.514093 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:11.516139 kubelet[2532]: E0117 00:21:11.514197 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67d567dbb6-j2ffv_calico-system(75ed374b-4149-4248-8b00-b1cb0ceb9572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:11.516139 kubelet[2532]: E0117 00:21:11.514249 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:21:12.123194 systemd[1]: Started sshd@8-165.232.147.124:22-4.153.228.146:43868.service - OpenSSH per-connection server daemon (4.153.228.146:43868). Jan 17 00:21:12.172598 kubelet[2532]: E0117 00:21:12.172551 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:21:12.563297 sshd[5250]: Accepted publickey for core from 4.153.228.146 port 43868 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:12.566164 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:12.578002 systemd-logind[1452]: New session 9 of user core. Jan 17 00:21:12.583732 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:21:12.990750 sshd[5250]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:12.995486 systemd[1]: sshd@8-165.232.147.124:22-4.153.228.146:43868.service: Deactivated successfully. Jan 17 00:21:12.999566 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:21:13.000857 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:21:13.003130 systemd-logind[1452]: Removed session 9. 
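[Editor's note] The repeated `cni-plugin/k8s.go 604` warnings earlier in this log record Calico declining to delete a WorkloadEndpoint because the DEL request's CNI_CONTAINERID (the old sandbox, e.g. 71d359a5...) no longer matches the ContainerID stored on the endpoint (the live sandbox, c5fb2531... for coredns-66bc5c9577-ntxlc). A minimal sketch of that guard follows; the types and function names are hypothetical stand-ins paraphrasing the behaviour the log describes, not Calico's actual source.

```go
// Sketch of the idempotent-DEL guard behind "CNI_CONTAINERID does not match
// WorkloadEndpoint ContainerID, don't delete WEP". All names here are
// illustrative stand-ins, not Calico's real types.
package main

import "fmt"

type workloadEndpoint struct {
	Name        string
	ContainerID string // sandbox that currently owns this endpoint
}

// shouldDeleteWEP returns true only when the DEL comes from the same sandbox
// that created the endpoint. A stale DEL (for an already-replaced sandbox,
// as with 71d359a5... above) must leave the live endpoint alone and fall
// through to netns and IPAM cleanup only.
func shouldDeleteWEP(wep workloadEndpoint, cniContainerID string) bool {
	return wep.ContainerID == cniContainerID
}

func main() {
	wep := workloadEndpoint{
		Name:        "coredns-66bc5c9577-ntxlc",
		ContainerID: "c5fb2531b60d...", // live sandbox, per the WEP dump above
	}
	if !shouldDeleteWEP(wep, "71d359a53022...") { // stale sandbox being removed
		fmt.Println("don't delete WEP; proceed to netns and IPAM cleanup only")
	}
}
```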
Jan 17 00:21:17.160733 kubelet[2532]: E0117 00:21:17.160666 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:21:18.084629 systemd[1]: Started sshd@9-165.232.147.124:22-4.153.228.146:39246.service - OpenSSH per-connection server daemon (4.153.228.146:39246). Jan 17 00:21:18.162017 kubelet[2532]: E0117 00:21:18.161950 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:21:18.524288 sshd[5268]: Accepted publickey for core from 4.153.228.146 port 39246 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:18.526001 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:18.535861 systemd-logind[1452]: New session 10 of user core. Jan 17 00:21:18.541519 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:21:18.954149 sshd[5268]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:18.961191 systemd[1]: sshd@9-165.232.147.124:22-4.153.228.146:39246.service: Deactivated successfully. Jan 17 00:21:18.966584 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:21:18.970105 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:21:18.973606 systemd-logind[1452]: Removed session 10. Jan 17 00:21:19.038951 systemd[1]: Started sshd@10-165.232.147.124:22-4.153.228.146:39256.service - OpenSSH per-connection server daemon (4.153.228.146:39256). Jan 17 00:21:19.466611 sshd[5283]: Accepted publickey for core from 4.153.228.146 port 39256 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:19.468912 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:19.475321 systemd-logind[1452]: New session 11 of user core. Jan 17 00:21:19.481534 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:21:20.000787 sshd[5283]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:20.005630 systemd[1]: sshd@10-165.232.147.124:22-4.153.228.146:39256.service: Deactivated successfully. Jan 17 00:21:20.010927 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:21:20.014145 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:21:20.019226 systemd-logind[1452]: Removed session 11. Jan 17 00:21:20.080386 systemd[1]: Started sshd@11-165.232.147.124:22-4.153.228.146:39272.service - OpenSSH per-connection server daemon (4.153.228.146:39272). 
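[Editor's note] Every ErrImagePull cycle in this log is containerd's CRI layer failing to resolve a ghcr.io tag. The same failure can be reproduced outside kubelet with the containerd Go client; a minimal sketch, assuming the default socket path and the conventional "k8s.io" namespace that CRI-managed images live in (both may differ on a given host).

```go
// Reproduce the pull the CRI layer performs and distinguish a missing
// reference from other failures.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace by convention.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // tag from the log above
	_, err = client.Pull(ctx, ref, containerd.WithPullUnpack)
	switch {
	case errdefs.IsNotFound(err):
		// The condition containerd logs as
		// "failed to resolve reference ...: not found".
		fmt.Printf("reference %s does not exist upstream\n", ref)
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Println("pulled successfully")
	}
}
```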
Jan 17 00:21:20.506342 sshd[5294]: Accepted publickey for core from 4.153.228.146 port 39272 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:20.508620 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:20.516338 systemd-logind[1452]: New session 12 of user core. Jan 17 00:21:20.522945 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:21:20.955915 sshd[5294]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:20.963002 systemd[1]: sshd@11-165.232.147.124:22-4.153.228.146:39272.service: Deactivated successfully. Jan 17 00:21:20.969672 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:21:20.975040 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:21:20.978448 systemd-logind[1452]: Removed session 12. Jan 17 00:21:21.163079 kubelet[2532]: E0117 00:21:21.163023 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8" Jan 17 00:21:22.165895 kubelet[2532]: E0117 00:21:22.165811 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:21:22.171407 kubelet[2532]: E0117 00:21:22.171326 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:21:23.161713 kubelet[2532]: E0117 00:21:23.161656 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:21:25.160039 kubelet[2532]: E0117 00:21:25.159983 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:21:26.050800 systemd[1]: Started sshd@12-165.232.147.124:22-4.153.228.146:35754.service - OpenSSH per-connection server daemon (4.153.228.146:35754). Jan 17 00:21:26.163889 kubelet[2532]: E0117 00:21:26.163594 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:21:26.579656 sshd[5331]: Accepted publickey for core from 4.153.228.146 port 35754 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:26.584746 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:26.593815 systemd-logind[1452]: New session 13 of user core. Jan 17 00:21:26.603555 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:21:27.036388 sshd[5331]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:27.045143 systemd[1]: sshd@12-165.232.147.124:22-4.153.228.146:35754.service: Deactivated successfully. Jan 17 00:21:27.049884 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:21:27.052518 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:21:27.055473 systemd-logind[1452]: Removed session 13. 
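[Editor's note] The recurring `dns.go:154` "Nameserver limits exceeded" warnings come from kubelet truncating the host's resolv.conf to the classic three-nameserver cap (the glibc resolver's limit); note the applied line it reports even keeps a duplicate, 67.207.67.3 twice. A small illustration of that truncation rule follows; the fourth input entry is a guess, since the log only shows the applied line, and this is not kubelet's actual code.

```go
// Illustration of the three-nameserver cap behind kubelet's
// "Nameserver limits exceeded" warning. The limit of 3 matches the classic
// glibc resolver; kubelet's real check lives in its dns package.
package main

import "fmt"

const maxNameservers = 3

// applyNameserverLimit keeps the first three entries in file order, duplicates
// included, and reports the rest as omitted.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// First three entries mirror the applied line from the log; the fourth
	// is an assumed extra entry that would have triggered the warning.
	resolvConf := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.2"}
	applied, omitted := applyNameserverLimit(resolvConf)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```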
Jan 17 00:21:29.161990 containerd[1481]: time="2026-01-17T00:21:29.161915341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:29.527990 containerd[1481]: time="2026-01-17T00:21:29.527844403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:29.529812 containerd[1481]: time="2026-01-17T00:21:29.529724280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:21:29.529974 containerd[1481]: time="2026-01-17T00:21:29.529862945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:29.530245 kubelet[2532]: E0117 00:21:29.530201 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:29.530610 kubelet[2532]: E0117 00:21:29.530284 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:29.534898 kubelet[2532]: E0117 00:21:29.534849 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m249n_calico-system(a3c98a2f-bcb2-4019-8b39-98c736ccd677): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:29.535053 kubelet[2532]: E0117 00:21:29.534917 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:21:32.110635 systemd[1]: Started sshd@13-165.232.147.124:22-4.153.228.146:35760.service - OpenSSH per-connection server daemon (4.153.228.146:35760). Jan 17 00:21:32.510306 sshd[5354]: Accepted publickey for core from 4.153.228.146 port 35760 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:32.512661 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:32.518213 systemd-logind[1452]: New session 14 of user core. Jan 17 00:21:32.521476 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:21:32.961392 sshd[5354]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:32.969243 systemd[1]: sshd@13-165.232.147.124:22-4.153.228.146:35760.service: Deactivated successfully. 
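[Editor's note] The line "trying next host - response was http.StatusNotFound" means the resolver's manifest request for the tag came back 404 from ghcr.io, so containerd gives up on the host before pulling anything (note `bytes read=77` and similar tiny counts). The tag's absence can be confirmed directly against the OCI distribution API; a sketch follows, assuming GHCR's anonymous token endpoint for public repositories (the standard bearer-token flow, details of which may vary by registry).

```go
// Probe the manifest endpoint the resolver hits, using an anonymous pull token.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4"

	// Anonymous pull token (standard OCI token flow; GHCR serves it at /token).
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the manifest for the tag; a 404 here is what containerd surfaces
	// as "response was http.StatusNotFound" before abandoning the host.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // expect "404 Not Found" for a missing tag
}
```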
Jan 17 00:21:32.969615 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:21:32.974128 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:21:32.977199 systemd-logind[1452]: Removed session 14. Jan 17 00:21:33.165867 containerd[1481]: time="2026-01-17T00:21:33.165102685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:33.524508 containerd[1481]: time="2026-01-17T00:21:33.524438795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:33.526984 containerd[1481]: time="2026-01-17T00:21:33.525502743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:33.526984 containerd[1481]: time="2026-01-17T00:21:33.525593416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:33.527210 kubelet[2532]: E0117 00:21:33.525793 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:33.527210 kubelet[2532]: E0117 00:21:33.525848 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:33.527210 kubelet[2532]: E0117 00:21:33.525931 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-599ddd4698-rsvbr_calico-apiserver(7cd0417c-a83c-4bf0-96f8-9680bbeb055b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:33.527210 kubelet[2532]: E0117 00:21:33.525965 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:21:35.162027 containerd[1481]: time="2026-01-17T00:21:35.160605532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:21:35.522006 containerd[1481]: time="2026-01-17T00:21:35.521823123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:35.522934 containerd[1481]: time="2026-01-17T00:21:35.522849071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:21:35.523033 containerd[1481]: time="2026-01-17T00:21:35.522884836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:21:35.525282 kubelet[2532]: E0117 00:21:35.523470 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:35.525282 kubelet[2532]: E0117 00:21:35.523541 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:35.525282 kubelet[2532]: E0117 00:21:35.523765 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:35.525904 containerd[1481]: time="2026-01-17T00:21:35.523902054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:21:35.882685 containerd[1481]: time="2026-01-17T00:21:35.882610427Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:35.884197 containerd[1481]: time="2026-01-17T00:21:35.884101790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:35.884406 containerd[1481]: time="2026-01-17T00:21:35.884124718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:35.884548 kubelet[2532]: E0117 00:21:35.884485 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:35.884622 kubelet[2532]: E0117 00:21:35.884562 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:35.884786 kubelet[2532]: E0117 00:21:35.884746 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:35.886288 containerd[1481]: time="2026-01-17T00:21:35.885908885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:21:36.195355 containerd[1481]: time="2026-01-17T00:21:36.195205090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:36.196144 containerd[1481]: time="2026-01-17T00:21:36.196069233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:21:36.196228 containerd[1481]: time="2026-01-17T00:21:36.196173631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:36.196807 kubelet[2532]: E0117 00:21:36.196398 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:36.196807 kubelet[2532]: E0117 00:21:36.196452 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:36.196807 kubelet[2532]: E0117 00:21:36.196622 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b849fdd9-cjl8m_calico-system(8f412fc1-0816-4220-80a1-194b624badc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:36.196947 kubelet[2532]: E0117 00:21:36.196665 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8" Jan 17 
00:21:36.197489 containerd[1481]: time="2026-01-17T00:21:36.197232355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:36.534326 containerd[1481]: time="2026-01-17T00:21:36.534129211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:36.535267 containerd[1481]: time="2026-01-17T00:21:36.535180918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:36.535615 containerd[1481]: time="2026-01-17T00:21:36.535316641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:36.535684 kubelet[2532]: E0117 00:21:36.535621 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:36.536110 kubelet[2532]: E0117 00:21:36.535691 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:36.536110 kubelet[2532]: E0117 00:21:36.535798 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pbn95_calico-system(785ca1fd-8ad2-4e63-be23-ced8350e2045): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:36.536110 kubelet[2532]: E0117 00:21:36.535863 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:21:37.165774 containerd[1481]: time="2026-01-17T00:21:37.164549253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:37.471528 containerd[1481]: 
time="2026-01-17T00:21:37.471393744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:37.475045 containerd[1481]: time="2026-01-17T00:21:37.473995803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:37.475045 containerd[1481]: time="2026-01-17T00:21:37.474101063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:37.475230 kubelet[2532]: E0117 00:21:37.474312 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:37.475230 kubelet[2532]: E0117 00:21:37.474362 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:37.475230 kubelet[2532]: E0117 00:21:37.474627 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67d567dbb6-j2ffv_calico-system(75ed374b-4149-4248-8b00-b1cb0ceb9572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:37.475230 kubelet[2532]: E0117 00:21:37.474663 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:21:37.477137 containerd[1481]: time="2026-01-17T00:21:37.475709585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:37.783931 containerd[1481]: time="2026-01-17T00:21:37.783604934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:37.787407 containerd[1481]: time="2026-01-17T00:21:37.786303511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:37.787407 containerd[1481]: 
time="2026-01-17T00:21:37.786455041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:37.788368 kubelet[2532]: E0117 00:21:37.787662 2532 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:37.788368 kubelet[2532]: E0117 00:21:37.787724 2532 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:37.788368 kubelet[2532]: E0117 00:21:37.787823 2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-599ddd4698-s5fxr_calico-apiserver(b34f9844-1f24-4158-8f3c-e8308ca5c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:37.788368 kubelet[2532]: E0117 00:21:37.787870 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:21:38.037759 systemd[1]: Started sshd@14-165.232.147.124:22-4.153.228.146:55612.service - OpenSSH per-connection server daemon (4.153.228.146:55612). Jan 17 00:21:38.453915 sshd[5371]: Accepted publickey for core from 4.153.228.146 port 55612 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:38.455039 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:38.463394 systemd-logind[1452]: New session 15 of user core. Jan 17 00:21:38.467969 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:21:38.884704 sshd[5371]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:38.893408 systemd[1]: sshd@14-165.232.147.124:22-4.153.228.146:55612.service: Deactivated successfully. Jan 17 00:21:38.896615 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:21:38.900461 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:21:38.903435 systemd-logind[1452]: Removed session 15. Jan 17 00:21:39.159559 kubelet[2532]: E0117 00:21:39.159424 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:21:43.963179 systemd[1]: Started sshd@15-165.232.147.124:22-4.153.228.146:55614.service - OpenSSH per-connection server daemon (4.153.228.146:55614). 
Jan 17 00:21:44.162387 kubelet[2532]: E0117 00:21:44.162330 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:21:44.407748 sshd[5387]: Accepted publickey for core from 4.153.228.146 port 55614 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:44.410938 sshd[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:44.418234 systemd-logind[1452]: New session 16 of user core. Jan 17 00:21:44.428680 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:21:44.828548 sshd[5387]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:44.836535 systemd[1]: sshd@15-165.232.147.124:22-4.153.228.146:55614.service: Deactivated successfully. Jan 17 00:21:44.836716 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:21:44.841021 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:21:44.842755 systemd-logind[1452]: Removed session 16. Jan 17 00:21:44.911756 systemd[1]: Started sshd@16-165.232.147.124:22-4.153.228.146:37298.service - OpenSSH per-connection server daemon (4.153.228.146:37298). Jan 17 00:21:45.320295 sshd[5400]: Accepted publickey for core from 4.153.228.146 port 37298 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:45.320825 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:45.334988 systemd-logind[1452]: New session 17 of user core. Jan 17 00:21:45.339647 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:21:45.852241 sshd[5400]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:45.871703 systemd[1]: sshd@16-165.232.147.124:22-4.153.228.146:37298.service: Deactivated successfully. Jan 17 00:21:45.875685 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:21:45.877590 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:21:45.880973 systemd-logind[1452]: Removed session 17. Jan 17 00:21:45.935614 systemd[1]: Started sshd@17-165.232.147.124:22-4.153.228.146:37302.service - OpenSSH per-connection server daemon (4.153.228.146:37302). Jan 17 00:21:46.392159 sshd[5411]: Accepted publickey for core from 4.153.228.146 port 37302 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:46.394844 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:46.402703 systemd-logind[1452]: New session 18 of user core. Jan 17 00:21:46.409940 systemd[1]: Started session-18.scope - Session 18 of User core. 
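The SSH records threaded through this stretch (sessions 14 through 18 so far) are routine socket activation, not part of the failure: each connection from 4.153.228.146 gets its own transient sshd@<n>-<local>:22-<peer>:<port>.service unit plus a session-<n>.scope from systemd-logind, and the "Deactivated successfully" / "Removed session" pairs are normal teardown. A live view at any moment (sketch):

    # One transient unit per open connection, named after local and peer endpoints
    systemctl list-units --no-legend 'sshd@*.service'
    # The matching logind sessions (user core in this log)
    loginctl list-sessions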
Jan 17 00:21:47.159296 kubelet[2532]: E0117 00:21:47.159226 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:21:47.165658 kubelet[2532]: E0117 00:21:47.165591 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:21:47.711244 sshd[5411]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:47.725662 systemd[1]: sshd@17-165.232.147.124:22-4.153.228.146:37302.service: Deactivated successfully. Jan 17 00:21:47.733575 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:21:47.736161 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:21:47.739293 systemd-logind[1452]: Removed session 18. Jan 17 00:21:47.782729 systemd[1]: Started sshd@18-165.232.147.124:22-4.153.228.146:37316.service - OpenSSH per-connection server daemon (4.153.228.146:37316). Jan 17 00:21:48.161599 kubelet[2532]: E0117 00:21:48.161357 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:21:48.225870 sshd[5427]: Accepted publickey for core from 4.153.228.146 port 37316 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:48.228603 sshd[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:48.239622 systemd-logind[1452]: New session 19 of user core. Jan 17 00:21:48.242566 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:21:48.899636 sshd[5427]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:48.908065 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:21:48.909052 systemd[1]: sshd@18-165.232.147.124:22-4.153.228.146:37316.service: Deactivated successfully. Jan 17 00:21:48.911935 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:21:48.913720 systemd-logind[1452]: Removed session 19. Jan 17 00:21:48.977675 systemd[1]: Started sshd@19-165.232.147.124:22-4.153.228.146:37330.service - OpenSSH per-connection server daemon (4.153.228.146:37330). Jan 17 00:21:49.406312 sshd[5440]: Accepted publickey for core from 4.153.228.146 port 37330 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:49.408423 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:49.418209 systemd-logind[1452]: New session 20 of user core. 
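From 00:21:44 the kubelet's messages change shape: ErrImagePull (a pull attempt just failed) becomes ImagePullBackOff (the kubelet is waiting out a back-off before retrying; the delay grows with each failure up to a cap of a few minutes, which matches the widening gaps between repeats below). The per-container state is easier to read from the API than from the journal (a sketch; pod name and namespace are taken from the log, and kubectl access is assumed):

    # Waiting reason per container: ErrImagePull vs ImagePullBackOff
    kubectl -n calico-system get pod goldmane-7c778bb748-m249n \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state.waiting.reason}{"\n"}{end}'
    # The same story, with back-off timing, from the event stream
    kubectl -n calico-system describe pod goldmane-7c778bb748-m249n | sed -n '/^Events:/,$p'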
Jan 17 00:21:49.422576 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:21:49.830548 sshd[5440]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:49.840631 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:21:49.844671 systemd[1]: sshd@19-165.232.147.124:22-4.153.228.146:37330.service: Deactivated successfully. Jan 17 00:21:49.847491 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:21:49.853169 systemd-logind[1452]: Removed session 20. Jan 17 00:21:50.163098 kubelet[2532]: E0117 00:21:50.162981 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:21:51.160509 kubelet[2532]: E0117 00:21:51.160449 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67d567dbb6-j2ffv" podUID="75ed374b-4149-4248-8b00-b1cb0ceb9572" Jan 17 00:21:51.161734 kubelet[2532]: E0117 00:21:51.161599 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8" Jan 17 00:21:54.923056 systemd[1]: Started sshd@20-165.232.147.124:22-4.153.228.146:33990.service - OpenSSH per-connection server daemon (4.153.228.146:33990). 
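Pods with two failing containers (csi-node-driver-pbn95 with calico-csi and csi-node-driver-registrar; whisker-6b849fdd9-cjl8m with whisker and whisker-backend) have their per-container errors joined into one bracketed list inside a single pod_workers "Error syncing pod" record, so each of those long entries above is one sync attempt for the pod, not one per container. The runtime-side view per container (sketch, using the crictl that ships on the node):

    # One line per container in the pod; none of them ever reach Running here
    POD_ID="$(crictl pods --name csi-node-driver-pbn95 -q)"
    crictl ps -a --pod "$POD_ID"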
Jan 17 00:21:55.370444 sshd[5455]: Accepted publickey for core from 4.153.228.146 port 33990 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:55.372645 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:55.381408 systemd-logind[1452]: New session 21 of user core. Jan 17 00:21:55.386814 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:21:55.839835 sshd[5455]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:55.845155 systemd[1]: sshd@20-165.232.147.124:22-4.153.228.146:33990.service: Deactivated successfully. Jan 17 00:21:55.850906 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:21:55.855377 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:21:55.856605 systemd-logind[1452]: Removed session 21. Jan 17 00:21:58.165184 kubelet[2532]: E0117 00:21:58.165131 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m249n" podUID="a3c98a2f-bcb2-4019-8b39-98c736ccd677" Jan 17 00:22:00.913683 systemd[1]: Started sshd@21-165.232.147.124:22-4.153.228.146:34006.service - OpenSSH per-connection server daemon (4.153.228.146:34006). Jan 17 00:22:01.161345 kubelet[2532]: E0117 00:22:01.161166 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-s5fxr" podUID="b34f9844-1f24-4158-8f3c-e8308ca5c340" Jan 17 00:22:01.163731 kubelet[2532]: E0117 00:22:01.161193 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599ddd4698-rsvbr" podUID="7cd0417c-a83c-4bf0-96f8-9680bbeb055b" Jan 17 00:22:01.354719 sshd[5493]: Accepted publickey for core from 4.153.228.146 port 34006 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:22:01.358578 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:01.366418 systemd-logind[1452]: New session 22 of user core. Jan 17 00:22:01.371598 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 00:22:01.787773 sshd[5493]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:01.792934 systemd[1]: sshd@21-165.232.147.124:22-4.153.228.146:34006.service: Deactivated successfully. Jan 17 00:22:01.793446 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:22:01.798022 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:22:01.800564 systemd-logind[1452]: Removed session 22. Jan 17 00:22:02.163437 kubelet[2532]: E0117 00:22:02.163389 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pbn95" podUID="785ca1fd-8ad2-4e63-be23-ced8350e2045" Jan 17 00:22:03.161453 kubelet[2532]: E0117 00:22:03.161372 2532 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b849fdd9-cjl8m" podUID="8f412fc1-0816-4220-80a1-194b624badc8"
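Every pull in this log dies on the first and only host: "trying next host" appears, but ghcr.io is the sole entry, so NotFound is final. If correctly tagged images exist on some other registry, containerd's certs.d mechanism can add a fallback host for the ghcr.io namespace. This is a sketch only, assuming the CRI registry config_path on this node is set to /etc/containerd/certs.d and that mirror.example.com stands in for a hypothetical registry that actually serves these repositories:

    # Fallback host for the ghcr.io namespace; hosts are tried before the
    # default server, so a mirror that has the tags would turn "trying next
    # host" into a successful pull.
    mkdir -p /etc/containerd/certs.d/ghcr.io
    cat >/etc/containerd/certs.d/ghcr.io/hosts.toml <<'EOF'
    server = "https://ghcr.io"

    # hypothetical mirror -- replace with a registry that really hosts the images
    [host."https://mirror.example.com"]
      capabilities = ["pull", "resolve"]
    EOF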